Search (35 results, page 1 of 2)

  • theme_ss:"Automatisches Klassifizieren"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.08075899 = sum of:
      0.060213163 = product of:
        0.24085265 = sum of:
          0.24085265 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.24085265 = score(doc=562,freq=2.0), product of:
              0.42854968 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.050548375 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.020545822 = product of:
        0.041091643 = sum of:
          0.041091643 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.041091643 = score(doc=562,freq=2.0), product of:
              0.1770118 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050548375 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
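The numeric breakdown attached to each hit is a Lucene ClassicSimilarity "explain" tree: each leaf multiplies a term's queryWeight (idf × queryNorm) by its fieldWeight (tf × idf × fieldNorm), and the coord factor scales for how many query clauses matched. The sketch below reproduces the arithmetic for the first hit; only the constants are taken from the breakdown above, while the function name and rounding are illustrative.

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm, coord):
    """One branch of a Lucene ClassicSimilarity explain tree:
    coord * queryWeight * fieldWeight, with tf = sqrt(freq)."""
    tf = math.sqrt(freq)
    query_weight = idf * query_norm       # e.g. 8.478011 * 0.050548375 = 0.42854968
    field_weight = tf * idf * field_norm  # e.g. 1.4142135 * 8.478011 * 0.046875 = 0.56201804
    return coord * query_weight * field_weight

# Constants copied from the explain tree for doc 562 above.
term_3a = classic_similarity(2.0, 8.478011, 0.050548375, 0.046875, coord=1/4)
term_22 = classic_similarity(2.0, 3.5018296, 0.050548375, 0.046875, coord=1/2)
print(term_3a + term_22)   # ~0.08075899, displayed as 0.08
```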
  2. Subramanian, S.; Shafer, K.E.: Clustering (2001) 0.02
    0.020545822 = product of:
      0.041091643 = sum of:
        0.041091643 = product of:
          0.08218329 = sum of:
            0.08218329 = weight(_text_:22 in 1046) [ClassicSimilarity], result of:
              0.08218329 = score(doc=1046,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.46428138 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 14:17:22
  3. Cathey, R.J.; Jensen, E.C.; Beitzel, S.M.; Frieder, O.; Grossman, D.: Exploiting parallelism to support scalable hierarchical clustering (2007) 0.02
    0.018049862 = product of:
      0.036099724 = sum of:
        0.036099724 = product of:
          0.07219945 = sum of:
            0.07219945 = weight(_text_:p in 448) [ClassicSimilarity], result of:
              0.07219945 = score(doc=448,freq=8.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.39725178 = fieldWeight in 448, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=448)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A distributed memory parallel version of the group average hierarchical agglomerative clustering algorithm is proposed to enable scaling the document clustering problem to large collections. Using standard message passing operations reduces interprocess communication while maintaining efficient load balancing. In a series of experiments using a subset of a standard Text REtrieval Conference (TREC) test collection, our parallel hierarchical clustering algorithm is shown to be scalable in terms of processors efficiently used and the collection size. Results show that our algorithm performs close to the expected O(n²/p) time on p processors rather than the worst-case O(n³/p) time. Furthermore, the O(n²/p) memory complexity per node allows larger collections to be clustered as the number of nodes increases. While partitioning algorithms such as k-means are trivially parallelizable, our results confirm those of other studies which showed that hierarchical algorithms produce significantly tighter clusters in the document clustering task. Finally, we show how our parallel hierarchical agglomerative clustering algorithm can be used as the clustering subroutine for a parallel version of the buckshot algorithm to cluster the complete TREC collection at near theoretical runtime expectations.
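As a reference point for the group-average linkage the abstract builds on, here is a deliberately naive serial sketch of the merge rule; the distributed message-passing machinery described in the paper is not reproduced, and the function and variable names are illustrative. Maintaining mean pairwise distances is exactly the expensive part that the paper distributes over p processors to approach O(n²/p).

```python
import numpy as np

def group_average_hac(X, n_clusters):
    """Naive group-average (UPGMA-style) agglomerative clustering.
    X: (n_docs, n_features) array of document vectors.
    Returns a list of clusters, each a list of document indices."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > n_clusters:
        best_pair, best_dist = None, np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Group-average linkage: mean pairwise distance between
                # every document in cluster a and every document in cluster b.
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best_dist:
                    best_pair, best_dist = (a, b), d
        a, b = best_pair
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters

# Tiny illustrative run on random "document" vectors.
rng = np.random.default_rng(0)
docs = rng.random((8, 4))
print(group_average_hac(docs, n_clusters=3))
```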
  4. Reiner, U.: Automatische DDC-Klassifizierung von bibliografischen Titeldatensätzen (2009) 0.02
    0.017121518 = product of:
      0.034243036 = sum of:
        0.034243036 = product of:
          0.06848607 = sum of:
            0.06848607 = weight(_text_:22 in 611) [ClassicSimilarity], result of:
              0.06848607 = score(doc=611,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.38690117 = fieldWeight in 611, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=611)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 8.2009 12:54:24
  5. HaCohen-Kerner, Y. et al.: Classification using various machine learning methods and combinations of key-phrases and visual features (2016) 0.02
    0.017121518 = product of:
      0.034243036 = sum of:
        0.034243036 = product of:
          0.06848607 = sum of:
            0.06848607 = weight(_text_:22 in 2748) [ClassicSimilarity], result of:
              0.06848607 = score(doc=2748,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.38690117 = fieldWeight in 2748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2748)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 2.2016 18:25:22
  6. Golub, K.: Automated subject classification of textual web documents (2006) 0.02
    0.015763424 = product of:
      0.03152685 = sum of:
        0.03152685 = product of:
          0.1261074 = sum of:
            0.1261074 = weight(_text_:author's in 5600) [ClassicSimilarity], result of:
              0.1261074 = score(doc=5600,freq=2.0), product of:
                0.33969283 = queryWeight, product of:
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.050548375 = queryNorm
                0.3712395 = fieldWeight in 5600, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.7201533 = idf(docFreq=144, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5600)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To provide an integrated perspective to similarities and differences between approaches to automated classification in different research communities (machine learning, information retrieval and library science), and point to problems with the approaches and automated classification as such. Design/methodology/approach - A range of works dealing with automated classification of full-text web documents are discussed. Explorations of individual approaches are given in the following sections: special features (description, differences, evaluation), application and characteristics of web pages. Findings - Provides major similarities and differences between the three approaches: document pre-processing and utilization of web-specific document characteristics is common to all the approaches; major differences are in applied algorithms, employment or not of the vector space model and of controlled vocabularies. Problems of automated classification are recognized. Research limitations/implications - The paper does not attempt to provide an exhaustive bibliography of related resources. Practical implications - As an integrated overview of approaches from different research communities with application examples, it is very useful for students in library and information science and computer science, as well as for practitioners. Researchers from one community have the information on how similar tasks are conducted in different communities. Originality/value - To the author's knowledge, no review paper on automated text classification attempted to discuss more than one community's approach from an integrated perspective.
  7. Malo, P.; Sinha, A.; Wallenius, J.; Korhonen, P.: Concept-based document classification using Wikipedia and value function (2011) 0.02
    0.015315816 = product of:
      0.030631632 = sum of:
        0.030631632 = product of:
          0.061263263 = sum of:
            0.061263263 = weight(_text_:p in 4948) [ClassicSimilarity], result of:
              0.061263263 = score(doc=4948,freq=4.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.33707932 = fieldWeight in 4948, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4948)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Bollmann, P.; Konrad, E.; Schneider, H.-J.; Zuse, H.: Anwendung automatischer Klassifikationsverfahren mit dem System FAKYR (1978) 0.01
    0.014439888 = product of:
      0.028879777 = sum of:
        0.028879777 = product of:
          0.057759553 = sum of:
            0.057759553 = weight(_text_:p in 82) [ClassicSimilarity], result of:
              0.057759553 = score(doc=82,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.31780142 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Ingwersen, P.; Wormell, I.: Ranganathan in the perspective of advanced information retrieval (1992) 0.01
    0.014439888 = product of:
      0.028879777 = sum of:
        0.028879777 = product of:
          0.057759553 = sum of:
            0.057759553 = weight(_text_:p in 7695) [ClassicSimilarity], result of:
              0.057759553 = score(doc=7695,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.31780142 = fieldWeight in 7695, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0625 = fieldNorm(doc=7695)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Ruiz, M.E.; Srinivasan, P.: Combining machine learning and hierarchical indexing structures for text categorization (2001) 0.01
    0.012634902 = product of:
      0.025269805 = sum of:
        0.025269805 = product of:
          0.05053961 = sum of:
            0.05053961 = weight(_text_:p in 1595) [ClassicSimilarity], result of:
              0.05053961 = score(doc=1595,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.27807623 = fieldWeight in 1595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1595)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Bianchini, C.; Bargioni, S.: Automated classification using linked open data : a case study on faceted classification and Wikidata (2021) 0.01
    0.012634902 = product of:
      0.025269805 = sum of:
        0.025269805 = product of:
          0.05053961 = sum of:
            0.05053961 = weight(_text_:p in 724) [ClassicSimilarity], result of:
              0.05053961 = score(doc=724,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.27807623 = fieldWeight in 724, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=724)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Cataloging and classification quarterly. 59(2021) no.8, p.835-852
  12. Bock, H.-H.: Datenanalyse zur Strukturierung und Ordnung von Information (1989) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 141) [ClassicSimilarity], result of:
              0.04794025 = score(doc=141,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 141, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=141)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Pages
    pp.1-22
  13. Dubin, D.: Dimensions and discriminability (1998) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 2338) [ClassicSimilarity], result of:
              0.04794025 = score(doc=2338,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 2338, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2338)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.1997 19:16:05
  14. Automatic classification research at OCLC (2002) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 1563) [ClassicSimilarity], result of:
              0.04794025 = score(doc=1563,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 1563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    5. 5.2003 9:22:09
  15. Jenkins, C.: Automatic classification of Web resources using Java and Dewey Decimal Classification (1998) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 1673) [ClassicSimilarity], result of:
              0.04794025 = score(doc=1673,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 1673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    1. 8.1996 22:08:06
  16. Yoon, Y.; Lee, C.; Lee, G.G.: An effective procedure for constructing a hierarchical text classification system (2006) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 5273) [ClassicSimilarity], result of:
              0.04794025 = score(doc=5273,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 5273, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5273)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:24:52
  17. Yi, K.: Automatic text classification using library classification schemes : trends, issues and challenges (2007) 0.01
    0.011985063 = product of:
      0.023970125 = sum of:
        0.023970125 = product of:
          0.04794025 = sum of:
            0.04794025 = weight(_text_:22 in 2560) [ClassicSimilarity], result of:
              0.04794025 = score(doc=2560,freq=2.0), product of:
                0.1770118 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050548375 = queryNorm
                0.2708308 = fieldWeight in 2560, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2560)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 9.2008 18:31:54
  18. Sun, A.; Lim, E.-P.; Ng, W.-K.: Performance measurement framework for hierarchical text classification (2003) 0.01
    0.010829916 = product of:
      0.021659832 = sum of:
        0.021659832 = product of:
          0.043319665 = sum of:
            0.043319665 = weight(_text_:p in 1808) [ClassicSimilarity], result of:
              0.043319665 = score(doc=1808,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.23835106 = fieldWeight in 1808, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1808)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.01
    0.010829916 = product of:
      0.021659832 = sum of:
        0.021659832 = product of:
          0.043319665 = sum of:
            0.043319665 = weight(_text_:p in 2563) [ClassicSimilarity], result of:
              0.043319665 = score(doc=2563,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.23835106 = fieldWeight in 2563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. As well, it can be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns with topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
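The "iterative principal eigenvector computation" mentioned in this abstract follows the Kleinberg/HITS tradition: power iteration on the link matrix of the base set. Below is a minimal sketch under that assumption; the toy adjacency matrix, tolerance, and function name are illustrative, and the ATD steps of base-set construction and clustering are not shown.

```python
import numpy as np

def principal_eigenvector(A, tol=1e-9, max_iter=1000):
    """Power iteration: principal eigenvector of A^T A (authority scores
    in the HITS sense), normalized to unit length."""
    M = A.T @ A
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(max_iter):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

# Toy hyperlink matrix: A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(principal_eigenvector(A))   # higher score = more "authoritative" page
```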
  20. Prabowo, R.; Jackson, M.; Burden, P.; Knoell, H.-D.: Ontology-based automatic classification for the Web pages : design, implementation and evaluation (2002) 0.01
    0.010829916 = product of:
      0.021659832 = sum of:
        0.021659832 = product of:
          0.043319665 = sum of:
            0.043319665 = weight(_text_:p in 3383) [ClassicSimilarity], result of:
              0.043319665 = score(doc=3383,freq=2.0), product of:
                0.18174732 = queryWeight, product of:
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.050548375 = queryNorm
                0.23835106 = fieldWeight in 3383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5955126 = idf(docFreq=3298, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • e (English) 30
  • d (German) 5

Types