Search (42 results, page 1 of 3)

  • language_ss:"e"
  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.06
    0.064629115 = sum of:
      0.052651923 = product of:
        0.2106077 = sum of:
          0.2106077 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.2106077 = score(doc=562,freq=2.0), product of:
              0.37473476 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.044200785 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.011977192 = product of:
        0.035931576 = sum of:
          0.035931576 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.035931576 = score(doc=562,freq=2.0), product of:
              0.15478362 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044200785 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
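The breakdown above is Lucene's ClassicSimilarity explain output: each term weight is queryWeight × fieldWeight, with tf = √freq and idf = 1 + ln(maxDocs/(docFreq+1)). A minimal sketch (the function name is ours; the formulas and inputs come from the explain tree) reproducing the 0.2106077 weight for `_text_:3a`:

```python
import math

def classic_similarity_term_weight(freq, doc_freq, max_docs, field_norm, query_norm):
    """Recompute one Lucene ClassicSimilarity term weight as
    queryWeight * fieldWeight, from the factors the explain tree reports."""
    tf = math.sqrt(freq)                             # 1.4142135 = tf(freq=2.0)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 = idf(docFreq=24, maxDocs=44218)
    query_weight = idf * query_norm                  # 0.37473476 = queryWeight
    field_weight = tf * idf * field_norm             # 0.56201804 = fieldWeight
    return query_weight * field_weight               # ~0.2106077 = weight(_text_:3a in 562)

weight = classic_similarity_term_weight(
    freq=2.0, doc_freq=24, max_docs=44218,
    field_norm=0.046875, query_norm=0.044200785)
print(weight)  # agrees with the explain tree to about six decimal places
```

Note that queryNorm is taken directly from the output rather than derived; Lucene computes it once per query from all term weights.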
  2. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.06
    Date
    1. 2.2016 18:25:22
  3. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.04
    Date
    22. 7.2006 16:24:52
  4. Mengle, S.; Goharian, N.: Passage detection using text classification (2009) 0.03
    Date
    22. 3.2009 19:14:43
  5. Yu, W.; Gong, Y.: Document clustering by concept factorization (2004) 0.02
  6. Shen, D.; Chen, Z.; Yang, Q.; Zeng, H.J.; Zhang, B.; Lu, Y.; Ma, W.Y.: Web page classification through summarization (2004) 0.02
  7. Chung, Y.-M.; Noh, Y.-H.: Developing a specialized directory system by automatically classifying Web documents (2003) 0.02
  8. Chung, Y.M.; Lee, J.Y.: A corpus-based approach to comparative evaluation of statistical term association measures (2001) 0.01
    Abstract
    Statistical association measures have been widely applied in information retrieval research, usually employing a clustering of documents or terms on the basis of their relationships. Applications of the association measures for term clustering include automatic thesaurus construction and query expansion. This research evaluates the similarity of six association measures by comparing the relationship and behavior they demonstrate in various analyses of a test corpus. Analysis techniques include comparisons of highly ranked term pairs and term clusters, analyses of the correlation among the association measures using Pearson's correlation coefficient and MDS mapping, and an analysis of the impact of a term frequency on the association values by means of z-score. The major findings of the study are as follows: First, the most similar association measures are mutual information and Yule's coefficient of colligation Y, whereas cosine and Jaccard coefficients, as well as X**2 statistic and likelihood ratio, demonstrate quite similar behavior for terms with high frequency. Second, among all the measures, the X**2 statistic is the least affected by the frequency of terms. Third, although cosine and Jaccard coefficients tend to emphasize high frequency terms, mutual information and Yule's Y seem to overestimate rare terms
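The measures this abstract compares can be sketched on a 2×2 term co-occurrence table. The counts below are hypothetical, and the formulas are the standard textbook forms (pointwise mutual information, Yule's coefficient of colligation Y), not necessarily the exact variants used in the study:

```python
import math

def association_measures(a, b, c, d):
    """Association measures for a 2x2 co-occurrence table:
    a = docs containing both terms, b/c = docs with only one, d = neither."""
    n = a + b + c + d
    cosine = a / math.sqrt((a + b) * (a + c))
    jaccard = a / (a + b + c)
    pmi = math.log2(n * a / ((a + b) * (a + c)))  # pointwise mutual information
    yule_y = ((math.sqrt(a * d) - math.sqrt(b * c))
              / (math.sqrt(a * d) + math.sqrt(b * c)))
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return {"cosine": cosine, "jaccard": jaccard,
            "mi": pmi, "yule_y": yule_y, "chi2": chi2}

# Hypothetical counts: two terms co-occur in 30 of 1000 documents.
print(association_measures(30, 20, 10, 940))
```

With these counts the d cell (documents containing neither term) is large, which is exactly why Yule's Y and mutual information inflate the score for rare terms while cosine and Jaccard, which ignore d, do not.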
  9. Yang, Y.; Liu, X.: A re-examination of text categorization methods (1999) 0.01
  10. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.01
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n**2/p) time on p processors rather than the worst-case O(n**3/p) time. Furthermore, the O(n**2/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
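As a rough illustration of the clustering subroutine the paper parallelizes, here is a serial group-average agglomerative sketch (toy data and function name are ours; the paper's version distributes this loop over p processors via message passing):

```python
import itertools

def group_average_hac(vectors, target_clusters):
    """Serial group-average hierarchical agglomerative clustering:
    repeatedly merge the pair of clusters with the highest average
    pairwise cosine similarity until target_clusters remain."""
    def cos(u, v):
        dot = sum(x * y for x, y in zip(u, v))
        return dot / ((sum(x * x for x in u) ** 0.5)
                      * (sum(x * x for x in v) ** 0.5))

    clusters = [[v] for v in vectors]  # start with singletons
    while len(clusters) > target_clusters:
        # O(n^2) scan over cluster pairs per merge step
        i, j = max(
            itertools.combinations(range(len(clusters)), 2),
            key=lambda ij: sum(cos(u, v)
                               for u in clusters[ij[0]]
                               for v in clusters[ij[1]])
                           / (len(clusters[ij[0]]) * len(clusters[ij[1]])))
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

docs = [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.8)]
print([len(c) for c in group_average_hac(docs, 2)])  # two clusters of two
```

The nested similarity scan is where the O(n²/p) time and memory costs cited in the abstract come from: each of the n merge steps compares cluster pairs, and the work is partitioned across nodes.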
  11. Choi, B.; Peng, X.: Dynamic and hierarchical classification of Web pages (2004) 0.01
    Abstract
    Automatic classification of Web pages is an effective way to organise the vast amount of information and to assist in retrieving relevant information from the Internet. Although many automatic classification systems have been proposed, most of them ignore the conflict between the fixed number of categories and the growing number of Web pages being added into the systems. They also require searching through all existing categories to make any classification. This article proposes a dynamic and hierarchical classification system that is capable of adding new categories as required, organising the Web pages into a tree structure, and classifying Web pages by searching through only one path of the tree. The proposed single-path search technique reduces the search complexity from (n) to (log(n)). Test results show that the system improves the accuracy of classification by 6 percent in comparison to related systems. The dynamic-category expansion technique also achieves satisfying results for adding new categories into the system as required.
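The single-path idea can be sketched as follows; the category tree, keyword profiles, and term-overlap scoring below are illustrative assumptions, not the article's actual matching function:

```python
def classify_single_path(doc_terms, node):
    """Single-path hierarchical classification sketch: instead of scoring
    all n leaf categories, descend the tree by picking the best-matching
    child at each level, so a balanced tree needs O(log n) comparisons."""
    path = [node["name"]]
    while node.get("children"):
        node = max(node["children"],
                   key=lambda child: len(doc_terms & child["terms"]))
        path.append(node["name"])
    return path

# Hypothetical two-level category tree with toy keyword profiles.
tree = {
    "name": "root",
    "children": [
        {"name": "sports", "terms": {"game", "team", "score"},
         "children": [
             {"name": "soccer", "terms": {"goal", "pitch"}, "children": []},
             {"name": "tennis", "terms": {"serve", "racket"}, "children": []}]},
        {"name": "science", "terms": {"study", "theory", "data"},
         "children": [
             {"name": "physics", "terms": {"quantum", "energy"}, "children": []},
             {"name": "biology", "terms": {"cell", "gene"}, "children": []}]}]}

print(classify_single_path({"study", "data", "gene", "cell"}, tree))
```

Adding a new category here only inserts a node under its parent; existing branches and their classifications are untouched, which is the dynamic-expansion property the abstract describes.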
  12. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.01
    Date
    5. 5.2003 14:17:22
  13. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.01
  14. Yoon, Y.; Lee, G.G.: Efficient implementation of associative classifiers for document classification (2007) 0.01
  15. Ko, Y.; Seo, J.: Text classification from unlabeled documents with bootstrapping and feature projection techniques (2009) 0.01
  16. Xu, Y.; Bernard, A.: Knowledge organization through statistical computation : a new approach (2009) 0.01
  17. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B.de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.01
  18. Aphinyanaphongs, Y.; Fu, L.D.; Li, Z.; Peskin, E.R.; Efstathiadis, E.; Aliferis, C.F.; Statnikov, A.: A comprehensive empirical comparison of modern supervised classification and feature selection methods for text categorization (2014) 0.01
  19. Ko, Y.: ¬A new term-weighting scheme for text classification using the odds of positive and negative class probabilities (2015) 0.01
  20. Wu, M.; Liu, Y.-H.; Brownlee, R.; Zhang, X.: Evaluating utility and automatic classification of subject metadata from Research Data Australia (2021) 0.01

Types

  • a 40
  • el 2