Search (190 results, page 1 of 10)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.11
    0.11059235 = Lucene ClassicSimilarity relevance score, summed over the matched terms "3a" (0.08295621; freq 2, idf 8.478011, coord 1/3), "in" (0.0064065247; freq 2, idf 1.3602545) and "22" (0.021229617; freq 2, idf 3.5018296, coord 1/2), with queryNorm 0.052230705 and fieldNorm 0.046875.
    
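    The breakdown above follows Lucene's classic tf-idf scoring, in which each matched term contributes queryWeight x fieldWeight = (idf x queryNorm) x (sqrt(freq) x idf x fieldNorm), optionally scaled by a coordination factor. A minimal sketch that recomputes the displayed total from the figures shown (function name and layout are illustrative):

    ```python
    # Recomputes the relevance score of hit 1 from the figures summarized above.
    # Assumes Lucene's classic tf-idf (ClassicSimilarity) scoring, in which each
    # matched term contributes (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm),
    # optionally scaled by a coordination factor.
    QUERY_NORM = 0.052230705
    FIELD_NORM = 0.046875

    def term_score(freq, idf, coord=1.0):
        query_weight = idf * QUERY_NORM
        field_weight = (freq ** 0.5) * idf * FIELD_NORM
        return query_weight * field_weight * coord

    score = (
        term_score(freq=2, idf=8.478011, coord=1 / 3)     # term "3a"
        + term_score(freq=2, idf=1.3602545)               # term "in"
        + term_score(freq=2, idf=3.5018296, coord=1 / 2)  # term "22"
    )
    print(f"{score:.8f}")  # ~0.11059235, matching the displayed total
    ```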
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
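    The abstract above combines bag-of-words term features with concept features drawn from background knowledge and classifies with boosting. A toy sketch of that general idea (hypothetical data and concept counts; not the authors' implementation):

    ```python
    # Toy sketch: term features plus "concept" features, classified by boosting.
    # The concept counts are hypothetical stand-ins for background-knowledge lookups.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["stocks fell sharply on friday", "the striker scored twice on saturday"]
    labels = ["economy", "sport"]
    concept_features = np.array([[1, 0], [0, 1]])  # e.g. hits for FINANCE / SPORT concepts

    bow = CountVectorizer().fit_transform(docs).toarray()
    X = np.hstack([bow, concept_features])         # terms + concepts, as in the abstract

    clf = AdaBoostClassifier(n_estimators=50).fit(X, labels)  # boosted decision stumps
    print(clf.predict(X))
    ```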
  2. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.08
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  3. Dubin, D.: Dimensions and discriminability (1998) 0.07
    Abstract
    Visualization interfaces can improve subject access by highlighting the inclusion of document representation components in similarity and discrimination relationships. Within a set of retrieved documents, what kinds of groupings can index terms and subject headings make explicit? The role of controlled vocabulary in classifying search output is examined
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
    Source
    Visualizing subject access for 21st century information resources: Papers presented at the 1997 Clinic on Library Applications of Data Processing, 2-4 Mar 1997, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Ed.: P.A. Cochrane et al
  4. Yoon, Y.; Lee, C.; Lee, G.G.: ¬An effective procedure for constructing a hierarchical text classification system (2006) 0.06
    Abstract
    In text categorization tasks, classification over some class hierarchies yields better results than classification without the hierarchy. Currently, because a large number of documents are divided into several subgroups in a hierarchy, we can appropriately use a hierarchical classification method. However, we have no systematic method to build a hierarchical classification system that performs well with large collections of practical data. In this article, we introduce a new evaluation scheme for internal node classifiers, which can be used effectively to develop a hierarchical classification system. We also show that our method for constructing the hierarchical classification system is very effective, especially for the task of constructing classifiers applied to a hierarchy tree with many levels.
    Date
    22. 7.2006 16:24:52
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.431-442
  5. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.05
    Abstract
    This paper introduces a project to develop a reliable, cost-effective method for classifying Internet texts into register categories, and apply that approach to the analysis of a large corpus of web documents. To date, the project has proceeded in 2 key phases. First, we developed a bottom-up method for web register classification, asking end users of the web to utilize a decision-tree survey to code relevant situational characteristics of web documents, resulting in a bottom-up identification of register and subregister categories. We present details regarding the development and testing of this method through a series of 10 pilot studies. Then, in the second phase of our project we applied this procedure to a corpus of 53,000 web documents. An analysis of the results demonstrates the effectiveness of these methods for web register classification and provides a preliminary description of the types and distribution of registers on the web.
    Date
    4. 8.2015 19:22:04
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.9, S.1817-1831
  6. Liu, R.-L.: Context recognition for hierarchical text classification (2009) 0.05
    Abstract
    Information is often organized as a text hierarchy. A hierarchical text-classification system is thus essential for the management, sharing, and dissemination of information. It aims to automatically classify each incoming document into zero, one, or several categories in the text hierarchy. In this paper, we present a technique called CRHTC (context recognition for hierarchical text classification) that performs hierarchical text classification by recognizing the context of discussion (COD) of each category. A category's COD is governed by its ancestor categories, whose contents indicate contextual backgrounds of the category. A document may be classified into a category only if its content matches the category's COD. CRHTC does not require any trials to manually set parameters, and hence is more portable and easier to implement than other methods. It is empirically evaluated under various conditions. The results show that CRHTC achieves both better and more stable performance than several hierarchical and nonhierarchical text-classification methodologies.
    Date
    22. 3.2009 19:11:54
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.803-813
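    The CRHTC technique described above routes each document through a category hierarchy and additionally filters candidates by the ancestor-derived context of discussion. For orientation only, a generic top-down hierarchical text classifier (not CRHTC; toy hierarchy and data assumed) can be sketched as:

    ```python
    # Generic top-down hierarchical classification sketch: one classifier per
    # internal node, and each document is routed from the root down to a leaf.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    node_training = {  # hypothetical two-level hierarchy
        "root": (["quantum particle experiment", "gene expression in cells",
                  "league football results"], ["science", "science", "sports"]),
        "science": (["quantum particle experiment", "gene expression in cells"],
                    ["physics", "biology"]),
    }
    node_clf = {node: make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(docs, labels)
                for node, (docs, labels) in node_training.items()}

    def classify(text, node="root"):
        while node in node_clf:                    # descend until a leaf category
            node = node_clf[node].predict([text])[0]
        return node

    print(classify("an experiment with quantum particles"))  # e.g. "physics"
    ```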
  7. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.05
    Abstract
    We describe the latent semantic indexing subspace signature model (LSISSM) for semantic content representation of unstructured text. Grounded on singular value decomposition, the model represents terms and documents by the distribution signatures of their statistical contribution across the top-ranking latent concept dimensions. LSISSM matches term signatures with document signatures according to their mapping coherence between latent semantic indexing (LSI) term subspace and LSI document subspace. LSISSM does feature reduction and finds a low-rank approximation of scalable and sparse term-document matrices. Experiments demonstrate that this approach significantly improves the performance of major clustering algorithms such as standard K-means and self-organizing maps compared with the vector space model and the traditional LSI model. The unique contribution ranking mechanism in LSISSM also improves the initialization of standard K-means compared with random seeding procedure, which sometimes causes low efficiency and effectiveness of clustering. A two-stage initialization strategy based on LSISSM significantly reduces the running time of standard K-means procedures.
    Date
    23. 3.2013 13:22:36
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.4, S.844-860
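    The LSISSM model above is grounded on singular value decomposition. For orientation, a plain LSI-style pipeline (truncated SVD over tf-idf vectors followed by K-means, not the authors' signature model; toy documents assumed) looks like this:

    ```python
    # Plain LSI-style sketch: project sparse tf-idf term-document vectors into a
    # low-rank latent space via truncated SVD, then cluster documents with K-means.
    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "latent semantic indexing of text collections",
        "singular value decomposition of term document matrices",
        "football results from the weekend league",
        "the league table after this weekend's football",
    ]
    X = TfidfVectorizer().fit_transform(docs)           # sparse term-document matrix
    Z = TruncatedSVD(n_components=2).fit_transform(X)   # low-rank latent representation
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)
    print(labels)  # documents on the same topic should share a cluster label
    ```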
  8. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.05
    Abstract
    Passages can be hidden within a text to circumvent their disallowed transfer. Such release of compartmentalized information is of concern to all corporate and governmental organizations. Passage retrieval is well studied; we posit, however, that passage detection is not. Passage retrieval is the determination of the degree of relevance of blocks of text, namely passages, comprising a document. Rather than determining the relevance of a document in its entirety, passage retrieval determines the relevance of the individual passages. As such, modified traditional information-retrieval techniques compare terms found in user queries with the individual passages to determine a similarity score for passages of interest. In passage detection, passages are classified into predetermined categories. More often than not, passage detection techniques are deployed to detect hidden paragraphs in documents. That is, to hide information, hidden text is injected into the passages of documents. Rather than matching query terms against passages to determine their relevance, using text-mining techniques, the passages are classified. Those documents with hidden passages are defined as infected. Thus, simply stated, passage retrieval is the search for passages relevant to a user query, while passage detection is the classification of passages. That is, in passage detection, passages are labeled with one or more categories from a set of predetermined categories. We present a keyword-based dynamic passage approach (KDP) and demonstrate that KDP statistically significantly (99% confidence) outperforms the other document-splitting approaches by 12% to 18% in the passage detection and passage category-prediction tasks. Furthermore, we evaluate the effects of feature selection, passage length, ambiguous passages, and finally training-data category distribution on passage-detection accuracy.
    Date
    22. 3.2009 19:14:43
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.4, S.814-825
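    The paper above frames passage detection as classification of passages rather than of whole documents. A minimal sketch of that framing (fixed-length word windows and a toy classifier stand in for the paper's keyword-based dynamic passage approach):

    ```python
    # Minimal passage-detection sketch: split a document into fixed-length word
    # windows and classify each window, so a passage on a restricted topic can be
    # flagged even inside an otherwise innocuous document.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_passages = ["quarterly revenue and profit figures",
                      "missile guidance system schematics"]
    train_labels = ["benign", "restricted"]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_passages, train_labels)

    def detect(document, window=8):
        words = document.split()
        passages = [" ".join(words[i:i + window]) for i in range(0, len(words), window)]
        return list(zip(passages, clf.predict(passages)))

    doc = ("the annual report discusses revenue and profit figures "
           "while hidden inside it are missile guidance system schematics")
    for passage, label in detect(doc):
        print(label, "|", passage)
    ```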
  9. Liu, R.-L.: ¬A passage extractor for classification of disease aspect information (2013) 0.04
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
    Date
    28.10.2013 19:22:57
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.11, S.2265-2277
  10. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.03
    Content
    Presentation slides for the talk given at the 98th Deutscher Bibliothekartag in Erfurt ("Ein neuer Blick auf Bibliotheken"), session TK10: Information erschließen und recherchieren / Inhalte erschließen - mit neuen Tools
    Date
    22. 8.2009 12:54:24
  11. Fang, H.: Classifying research articles in multidisciplinary sciences journals into subject categories (2015) 0.03
    Abstract
    In the Thomson Reuters Web of Science database, the subject categories of a journal are applied to all articles in the journal. However, many articles in multidisciplinary Sciences journals may only be represented by a small number of subject categories. To provide more accurate information on the research areas of articles in such journals, we can classify articles in these journals into subject categories as defined by Web of Science based on their references. For an article in a multidisciplinary sciences journal, the method counts the subject categories in all of the article's references indexed by Web of Science, and uses the most numerous subject categories of the references to determine the most appropriate classification of the article. We used articles in an issue of Proceedings of the National Academy of Sciences (PNAS) to validate the correctness of the method by comparing the obtained results with the categories of the articles as defined by PNAS and their content. This study shows that the method provides more precise search results for the subject category of interest in bibliometric investigations through recognition of articles in multidisciplinary sciences journals whose work relates to a particular subject category.
    Object
    Web of science
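    The rule described above amounts to a majority vote over the Web of Science subject categories of an article's cited references. A minimal sketch (the category labels are hypothetical):

    ```python
    # Assign an article in a multidisciplinary journal to the most frequent
    # Web of Science subject category among its cited references.
    from collections import Counter

    reference_categories = [          # hypothetical categories of the references
        "Biochemistry & Molecular Biology", "Cell Biology",
        "Biochemistry & Molecular Biology", "Neurosciences",
        "Biochemistry & Molecular Biology",
    ]
    category, count = Counter(reference_categories).most_common(1)[0]
    print(f"assigned category: {category} ({count} of {len(reference_categories)} references)")
    ```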
  12. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.02
    Abstract
    The Wolverhampton Web Library (WWLib) is a WWW search engine that provides access to UK-based information. The experimental version, developed in 1995, was a success but highlighted the need for a much higher degree of automation. An interesting feature of the experimental WWLib was that it organised information according to DDC. Discusses the advantages of classification and describes the automatic classifier that is being developed in Java as part of the new, fully automated WWLib
    Date
    1. 8.1996 22:08:06
  13. Zhang, X.: Rough set theory based automatic text categorization (2005) 0.02
    Abstract
    The research report "Rough Set Theory Based Automatic Text Categorization and the Handling of Semantic Heterogeneity" by Xueying Zhang has been published in English in book form. In her work, Zhang developed a method based on rough set theory that establishes relationships between the subject headings of different vocabularies. She was a staff member of the IZ from 2003 to 2005 and has been an Associate Professor at the Nanjing University of Science and Technology since October 2005.
    Footnote
    Nanjing University of Science and Technology, Diss.
  14. Pfeffer, M.: Automatische Vergabe von RVK-Notationen mittels fallbasiertem Schließen (2009) 0.02
    Abstract
    Classification of bibliographic units is indispensable for systematic access to a library's holdings and their shelf arrangement. Until now this task has been carried out manually by subject experts, either individually according to a classification scheme of their own devising or cooperatively according to a shared scheme. This work presents a method for automating the classification process. It employs case-based reasoning, a technique developed in artificial intelligence research. For every work for which bibliographic data are available, the method returns one or more candidate classifications. In experiments, the results of the automatic classification are compared with those produced by subject experts. These experiments demonstrate the high quality of the automatic classification and show that the method is suitable for significantly relieving subject experts of classification work. Even the nearly complete reclassification of a library catalogue is possible, with certain limitations.
    Date
    22. 8.2009 19:51:28
    Source
    Wissen bewegen - Bibliotheken in der Informationsgesellschaft / 97. Deutscher Bibliothekartag in Mannheim, 2008. Hrsg. von Ulrich Hohoff und Per Knudsen. Bearb. von Stefan Siebert
  15. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.02
    Abstract
    The task of data analysis is to order data, present them clearly, discover hidden and natural structures, distill the properties that are essential in this respect, and construct suitable models for describing the data. The paper provides an insight into the methods and principles of data analysis. Typical examples show which data can be analysed, which structures considered, which presentation and ordering methods used, which objectives pursued, and which evaluation criteria applied. The appropriate use of the different methods is also discussed, with attention drawn to the risk and nature of misinterpretations
    Pages
    S.1-22
  16. Automatic classification research at OCLC (2002) 0.02
    Abstract
    OCLC enlists the cooperation of the world's libraries to make the written record of humankind's cultural heritage more accessible through electronic media. Part of this goal can be accomplished through the application of the principles of knowledge organization. We believe that cultural artifacts are effectively lost unless they are indexed, cataloged and classified. Accordingly, OCLC has developed products, sponsored research projects, and encouraged participation in international standards communities whose outcome has been improved library classification schemes, cataloging productivity tools, and new proposals for the creation and maintenance of metadata. Though cataloging and classification require expert intellectual effort, we recognize that at least some of the work must be automated if we hope to keep pace with cultural change
    Date
    5. 5.2003 9:22:09
  17. Barbu, E.: What kind of knowledge is in Wikipedia? : unsupervised extraction of properties for similar concepts (2014) 0.02
    Abstract
    This article presents a novel method for extracting knowledge from Wikipedia and a classification schema for annotating the extracted knowledge. Unlike the majority of approaches in the literature, we use the raw Wikipedia text for knowledge acquisition. The main assumption made is that the concepts classified under the same node in a taxonomy are described in a comparable way in Wikipedia. The annotation of the extracted knowledge is done at two levels: ontological and logical. The extracted properties are evaluated in the traditional way, that is, by computing the precision of the extraction procedure and in a clustering task. The second method of evaluation is seldom used in the natural language processing community, but it is regularly employed in cognitive psychology.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.12, S.2489-2497
  18. Losee, R.M.; Haas, S.W.: Sublanguage terms : dictionaries, usage, and automatic classification (1995) 0.02
    Abstract
    The use of terms from natural and social science titles and abstracts is studied from the perspective of sublanguages and their specialized dictionaries. Explores different notions of sublanguage distinctiveness. Objective methods for separating hard and soft sciences are suggested based on measures of sublanguage use, dictionary characteristics, and sublanguage distinctiveness. Abstracts were automatically classified with a high degree of accuracy by using a formula that considers the degree of uniqueness of terms in each sublanguage. This may prove useful for text filtering of information retrieval systems
    Source
    Journal of the American Society for Information Science. 46(1995) no.7, S.519-529
  19. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.02
    Abstract
    We propose a new hybrid clustering framework to incorporate text mining with bibliometrics in journal set analysis. The framework integrates two different approaches: clustering ensemble and kernel-fusion clustering. To improve the flexibility and the efficiency of processing large-scale data, we propose an information-based weighting scheme to leverage the effect of multiple data sources in hybrid clustering. Three different algorithms are extended by the proposed weighting scheme and they are employed on a large journal set retrieved from the Web of Science (WoS) database. The clustering performance of the proposed algorithms is systematically evaluated using multiple evaluation methods, and they were cross-compared with alternative methods. Experimental results demonstrate that the proposed weighted hybrid clustering strategy is superior to other methods in clustering performance and efficiency. The proposed approach also provides a more refined structural mapping of journal sets, which is useful for monitoring and detecting new trends in different scientific fields.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, S.1105-1119
  20. Golub, K.: Automated subject classification of textual web documents (2006) 0.02
    Abstract
    Purpose - To provide an integrated perspective to similarities and differences between approaches to automated classification in different research communities (machine learning, information retrieval and library science), and point to problems with the approaches and automated classification as such. Design/methodology/approach - A range of works dealing with automated classification of full-text web documents are discussed. Explorations of individual approaches are given in the following sections: special features (description, differences, evaluation), application and characteristics of web pages. Findings - Provides major similarities and differences between the three approaches: document pre-processing and utilization of web-specific document characteristics is common to all the approaches; major differences are in applied algorithms, employment or not of the vector space model and of controlled vocabularies. Problems of automated classification are recognized. Research limitations/implications - The paper does not attempt to provide an exhaustive bibliography of related resources. Practical implications - As an integrated overview of approaches from different research communities with application examples, it is very useful for students in library and information science and computer science, as well as for practitioners. Researchers from one community have the information on how similar tasks are conducted in different communities. Originality/value - To the author's knowledge, no review paper on automated text classification attempted to discuss more than one community's approach from an integrated perspective.

Languages

  • e 156
  • d 32
  • a 1
  • chi 1

Types

  • a 159
  • el 25
  • x 6
  • m 4
  • r 3
  • s 2
  • d 1