Search (7 results, page 1 of 1)

  • theme_ss:"Automatisches Klassifizieren"
  • year_i:[2010 TO 2020}
  1. Liu, R.-L.: A passage extractor for classification of disease aspect information (2013) 0.01
    0.0144263785 = product of:
      0.043279134 = sum of:
        0.029076494 = weight(_text_:b in 1107) [ClassicSimilarity], result of:
          0.029076494 = score(doc=1107,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.19572285 = fieldWeight in 1107, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1107)
        0.014202639 = product of:
          0.028405279 = sum of:
            0.028405279 = weight(_text_:22 in 1107) [ClassicSimilarity], result of:
              0.028405279 = score(doc=1107,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.19345059 = fieldWeight in 1107, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1107)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
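The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown. As a minimal sketch, the printed numbers can be reproduced from the standard formulas, taking queryNorm and fieldNorm as given constants from the explain output:

```python
import math

# ClassicSimilarity building blocks, as used in the explain tree above.
def idf(doc_freq, max_docs):
    # idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq):
    # tf(t in d) = sqrt(freq)
    return math.sqrt(freq)

query_norm = 0.041930884   # taken from the explain output
field_norm = 0.0390625     # length norm, quantized by Lucene

idf_b = idf(3476, 44218)                      # 3.542962
query_weight = idf_b * query_norm             # 0.14855953
field_weight = tf(2.0) * idf_b * field_norm   # 0.19572285
score_b = query_weight * field_weight         # 0.029076494

# Document score = coord(2/6) * sum of clause scores; the second clause
# (_text_:22) contributes 0.014202639 per its own subtree.
doc_score = (score_b + 0.014202639) * (2 / 6)  # 0.0144263785
print(idf_b, score_b, doc_score)
```

The coord(2/6) factor reflects that two of six query clauses matched this document; queryNorm is shared across all clauses so scores within one result list are comparable.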
    
    Abstract
    Retrieval of disease information is often based on several key aspects such as etiology, diagnosis, treatment, prevention, and symptoms of diseases. Automatic identification of disease aspect information is thus essential. In this article, I model the aspect identification problem as a text classification (TC) problem in which a disease aspect corresponds to a category. The disease aspect classification problem poses two challenges to classifiers: (a) a medical text often contains information about multiple aspects of a disease and hence produces noise for the classifiers and (b) text classifiers often cannot extract the textual parts (i.e., passages) about the categories of interest. I thus develop a technique, PETC (Passage Extractor for Text Classification), that extracts passages (from medical texts) for the underlying text classifiers to classify. Case studies on thousands of Chinese and English medical texts show that PETC enhances a support vector machine (SVM) classifier in classifying disease aspect information. PETC also performs better than three state-of-the-art classifier enhancement techniques, including two passage extraction techniques for text classifiers and a technique that employs term proximity information to enhance text classifiers. The contribution is of significance to evidence-based medicine, health education, and healthcare decision support. PETC can be used in those application domains in which a text to be classified may have several parts about different categories.
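The core idea of the abstract — extract the passages most relevant to a category so the classifier sees less noise — can be sketched in a few lines. This is not the authors' PETC; the cue-term lists, the sentence-window passages, and the overlap score below are all invented for illustration:

```python
# Toy passage extractor: slice a text into overlapping sentence windows,
# then keep the window with the most category cue-term hits.
CUE_TERMS = {  # invented example vocabularies, one per disease aspect
    "treatment": {"therapy", "drug", "dose", "surgery", "treated"},
    "diagnosis": {"test", "scan", "symptom", "diagnosed", "biopsy"},
}

def passages(text, size=2):
    """Split a text into overlapping passages of `size` sentences."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    return [" ".join(sents[i:i + size]) for i in range(len(sents))]

def best_passage(text, category):
    """Return the passage with the most cue-term hits for `category`."""
    cues = CUE_TERMS[category]
    return max(passages(text),
               key=lambda p: sum(w.lower() in cues for w in p.split()))

doc = ("The patient was diagnosed after a biopsy. "
       "Standard therapy uses a low drug dose. "
       "Surgery is reserved for late stages.")
print(best_passage(doc, "treatment"))
```

A downstream classifier (an SVM in the article's experiments) would then be trained and applied on the extracted passage rather than the full, multi-aspect text.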
    Date
    28.10.2013 19:22:57
  2. Qu, B.; Cong, G.; Li, C.; Sun, A.; Chen, H.: An evaluation of classification models for question topic categorization (2012) 0.01
    0.006853395 = product of:
      0.04112037 = sum of:
        0.04112037 = weight(_text_:b in 237) [ClassicSimilarity], result of:
          0.04112037 = score(doc=237,freq=4.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.2767939 = fieldWeight in 237, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.0390625 = fieldNorm(doc=237)
      0.16666667 = coord(1/6)
    
    Abstract
    We study the problem of question topic classification using a very large real-world Community Question Answering (CQA) dataset from Yahoo! Answers. The dataset comprises 3.9 million questions organized into more than 1,000 categories in a hierarchy. To the best of our knowledge, this is the first systematic evaluation of the performance of different classification methods on question topic classification as well as on short texts. Specifically, we empirically evaluate the following in classifying questions into CQA categories: (a) the usefulness of n-gram features and bag-of-word features; (b) the performance of three standard classification algorithms (naive Bayes, maximum entropy, and support vector machines); (c) the performance of the state-of-the-art hierarchical classification algorithms; (d) the effect of training data size on performance; and (e) the effectiveness of the different components of CQA data, including subject, content, asker, and the best answer. The experimental results show what aspects are important for question topic classification in terms of both effectiveness and efficiency. We believe that the experimental findings from this study will be useful in real-world classification problems.
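As a rough illustration of the kind of bag-of-words baseline evaluated here, a multinomial naive Bayes classifier fits in a few lines. The training questions and category labels below are invented, not the Yahoo! Answers data:

```python
import math
from collections import Counter, defaultdict

# Invented toy training set: (question text, category) pairs.
train = [
    ("how do i install python on windows", "Computers"),
    ("why is my laptop fan so loud", "Computers"),
    ("what is a good recipe for pancakes", "Food"),
    ("how long should i bake a cake", "Food"),
]

def fit(data):
    word_counts = defaultdict(Counter)   # category -> word frequencies
    cat_counts = Counter()               # category -> number of questions
    for text, cat in data:
        cat_counts[cat] += 1
        word_counts[cat].update(text.split())
    return word_counts, cat_counts

def predict(text, word_counts, cat_counts):
    vocab = {w for c in word_counts.values() for w in c}
    best, best_lp = None, -math.inf
    for cat, n in cat_counts.items():
        total = sum(word_counts[cat].values())
        lp = math.log(n / sum(cat_counts.values()))   # log prior
        for w in text.split():                        # Laplace-smoothed likelihoods
            lp += math.log((word_counts[cat][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = cat, lp
    return best

wc, cc = fit(train)
print(predict("what is a good cake recipe", wc, cc))  # prints: Food
```

The paper's point (e) — that question subject, content, asker, and best answer contribute differently — would correspond here to choosing which text fields feed the `text.split()` feature extraction.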
  3. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.00
    0.0047342135 = product of:
      0.028405279 = sum of:
        0.028405279 = product of:
          0.056810558 = sum of:
            0.056810558 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.056810558 = score(doc=2748,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    1.2.2016 18:25:22
  4. Billal, B.; Fonseca, A.; Sadat, F.; Lounis, H.: Semi-supervised learning and social media text analysis towards multi-labeling categorization (2017) 0.00
    0.0038768656 = product of:
      0.023261193 = sum of:
        0.023261193 = weight(_text_:b in 4095) [ClassicSimilarity], result of:
          0.023261193 = score(doc=4095,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.15657827 = fieldWeight in 4095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=4095)
      0.16666667 = coord(1/6)
    
  5. Altinel, B.; Ganiz, M.C.: Semantic text classification : a survey of past and recent advances (2018) 0.00
    0.0038768656 = product of:
      0.023261193 = sum of:
        0.023261193 = weight(_text_:b in 5051) [ClassicSimilarity], result of:
          0.023261193 = score(doc=5051,freq=2.0), product of:
            0.14855953 = queryWeight, product of:
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.041930884 = queryNorm
            0.15657827 = fieldWeight in 5051, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.542962 = idf(docFreq=3476, maxDocs=44218)
              0.03125 = fieldNorm(doc=5051)
      0.16666667 = coord(1/6)
    
  6. Zhu, W.Z.; Allen, R.B.: Document clustering using the LSI subspace signature model (2013) 0.00
    0.0028405278 = product of:
      0.017043166 = sum of:
        0.017043166 = product of:
          0.03408633 = sum of:
            0.03408633 = weight(_text_:22 in 690) [ClassicSimilarity], result of:
              0.03408633 = score(doc=690,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.23214069 = fieldWeight in 690, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=690)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    23.3.2013 13:22:36
  7. Egbert, J.; Biber, D.; Davies, M.: Developing a bottom-up, user-based method of web register classification (2015) 0.00
    0.0028405278 = product of:
      0.017043166 = sum of:
        0.017043166 = product of:
          0.03408633 = sum of:
            0.03408633 = weight(_text_:22 in 2158) [ClassicSimilarity], result of:
              0.03408633 = score(doc=2158,freq=2.0), product of:
                0.1468348 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041930884 = queryNorm
                0.23214069 = fieldWeight in 2158, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2158)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    4.8.2015 19:22:04