Search (7 results, page 1 of 1)

  • × theme_ss:"Automatisches Klassifizieren"
  • × theme_ss:"Data Mining"
  1. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.01
    0.011655438 = product of:
      0.023310876 = sum of:
        0.023310876 = product of:
          0.034966312 = sum of:
            0.031998128 = weight(_text_:k in 967) [ClassicSimilarity], result of:
              0.031998128 = score(doc=967,freq=2.0), product of:
                0.16225883 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.04545348 = queryNorm
                0.19720423 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
            0.0029681858 = weight(_text_:s in 967) [ClassicSimilarity], result of:
              0.0029681858 = score(doc=967,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.060061958 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
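
    For readers decoding the tree above: this is Lucene's ClassicSimilarity (TF-IDF) explain output. Each matching term contributes queryWeight × fieldWeight, i.e. sqrt(tf) · idf² · queryNorm · fieldNorm, and the sum is scaled by the two coord factors. A minimal Python sketch (not part of the original listing) reproducing the figures above:

      import math

      def term_weight(tf, idf, query_norm, field_norm):
          # weight = queryWeight * fieldWeight, where
          #   queryWeight = idf * queryNorm
          #   fieldWeight = sqrt(tf) * idf * fieldNorm
          return (idf * query_norm) * (math.sqrt(tf) * idf * field_norm)

      query_norm = 0.04545348
      w_k = term_weight(2.0, 3.569778, query_norm, 0.0390625)   # -> 0.031998128
      w_s = term_weight(2.0, 1.0872376, query_norm, 0.0390625)  # -> 0.0029681858
      score = (w_k + w_s) * (2 / 3) * 0.5                       # coord(2/3) * coord(1/2)
      print(score)                                              # -> ~0.011655438

    The 0.01 shown next to the title is this score rounded to two decimal places.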
    
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future is a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag, and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features. [A schematic sketch of this five-classifier comparison appears after this record.]
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, pp.1399-1410
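
    As a concrete illustration of the comparison described in the abstract above, here is a hypothetical scikit-learn sketch (not the authors' code); the synthetic 18-feature matrix merely stands in for the paper's 7 content and 11 contextual hashtag features, which are not reproduced here:

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic stand-in for the paper's features: 7 content + 11 contextual = 18.
      X, y = make_classification(n_samples=1000, n_features=18, random_state=0)

      models = {
          "naive Bayes": GaussianNB(),
          "k-nearest neighbors": KNeighborsClassifier(),
          "decision tree": DecisionTreeClassifier(random_state=0),
          "SVM": SVC(),
          "logistic regression": LogisticRegression(max_iter=1000),
      }
      for name, model in models.items():
          f1 = cross_val_score(model, X, y, cv=5, scoring="f1_micro").mean()
          print(f"{name}: micro-F1 = {f1:.3f}")

    With the paper's real features, the abstract reports logistic regression coming out ahead on this metric.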
  2. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.00
    0.0013851533 = product of:
      0.0027703065 = sum of:
        0.0027703065 = product of:
          0.00831092 = sum of:
            0.00831092 = weight(_text_:s in 3940) [ClassicSimilarity], result of:
              0.00831092 = score(doc=3940,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.16817348 = fieldWeight in 3940, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3940)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Applied artificial intelligence. 16(2002) no.4, pp.283-292
  3. Liu, X.; Yu, S.; Janssens, F.; Glänzel, W.; Moreau, Y.; Moor, B. de: Weighted hybrid clustering by combining text mining and bibliometrics on a large-scale journal database (2010) 0.00
    8.3952973E-4 = product of:
      0.0016790595 = sum of:
        0.0016790595 = product of:
          0.0050371783 = sum of:
            0.0050371783 = weight(_text_:s in 3464) [ClassicSimilarity], result of:
              0.0050371783 = score(doc=3464,freq=4.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.101928525 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.6, pp.1105-1119
  4. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.00
    8.3952973E-4 = product of:
      0.0016790595 = sum of:
        0.0016790595 = product of:
          0.0050371783 = sum of:
            0.0050371783 = weight(_text_:s in 3015) [ClassicSimilarity], result of:
              0.0050371783 = score(doc=3015,freq=4.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.101928525 = fieldWeight in 3015, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3015)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, pp.1668-1678
  5. Brückner, T.; Dambeck, H.: Sortierautomaten : Grundlagen der Textklassifizierung [Sorting machines: fundamentals of text classification] (2003) 0.00
    7.915163E-4 = product of:
      0.0015830325 = sum of:
        0.0015830325 = product of:
          0.0047490974 = sum of:
            0.0047490974 = weight(_text_:s in 2398) [ClassicSimilarity], result of:
              0.0047490974 = score(doc=2398,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.09609913 = fieldWeight in 2398, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2398)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2003, no.19, pp.192-197
  6. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15-17, 2000 (2002) 0.00
    6.996081E-4 = product of:
      0.0013992162 = sum of:
        0.0013992162 = product of:
          0.0041976487 = sum of:
            0.0041976487 = weight(_text_:s in 5997) [ClassicSimilarity], result of:
              0.0041976487 = score(doc=5997,freq=4.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.08494043 = fieldWeight in 5997, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Pages
    XI, 535 p.
    Type
    s
  7. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.00
    5.936372E-4 = product of:
      0.0011872743 = sum of:
        0.0011872743 = product of:
          0.003561823 = sum of:
            0.003561823 = weight(_text_:s in 2563) [ClassicSimilarity], result of:
              0.003561823 = score(doc=2563,freq=2.0), product of:
                0.049418733 = queryWeight, product of:
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.04545348 = queryNorm
                0.072074346 = fieldWeight in 2563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.0872376 = idf(docFreq=40523, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 40(2004) no.2, pp.239-255