Search (9 results, page 1 of 1)

  • theme_ss:"Data Mining"
  • year_i:[2010 TO 2020}
  1. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.013402127 = product of:
      0.053608507 = sum of:
        0.053608507 = product of:
          0.107217014 = sum of:
            0.107217014 = weight(_text_:intelligenz in 2568) [ClassicSimilarity], result of:
              0.107217014 = score(doc=2568,freq=2.0), product of:
                0.21362439 = queryWeight, product of:
                  5.678294 = idf(docFreq=410, maxDocs=44218)
                  0.037621226 = queryNorm
                0.501895 = fieldWeight in 2568, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.678294 = idf(docFreq=410, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2568)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    With the ever-growing availability of data (Big Data) and rapid advances in data-driven machine learning, recent years have seen breakthroughs in artificial intelligence. This talk examines these developments with particular regard to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and show, again by example, which practical applications arise today in industries such as publishing, finance, and consulting.
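Note on the relevance figures: each breakdown shown with a result is a Lucene "explain" tree for the ClassicSimilarity (TF-IDF) scoring model. As a minimal sketch, the following Python snippet recomputes the score of result 1 from the numbers printed in its tree; the variable names are ours, and the two coord factors are copied from the tree rather than derived from the query.

    import math

    # Figures taken from the explain tree of result 1 (doc 2568, term "intelligenz").
    freq       = 2.0          # termFreq: the term occurs twice in the field
    doc_freq   = 410          # docFreq from the idf line
    max_docs   = 44218        # maxDocs from the idf line
    query_norm = 0.037621226  # queryNorm
    field_norm = 0.0625       # fieldNorm(doc=2568)

    idf = 1 + math.log(max_docs / (doc_freq + 1))   # ClassicSimilarity idf, ~5.678294
    tf  = math.sqrt(freq)                           # ClassicSimilarity tf,  ~1.4142135

    query_weight = idf * query_norm                 # ~0.21362439
    field_weight = tf * idf * field_norm            # ~0.501895
    term_score   = query_weight * field_weight      # ~0.107217014

    # coord(1/2) and coord(1/4): only one of two, respectively one of four, query clauses matched.
    print(term_score * 0.5 * 0.25)                  # ~0.013402127, the value shown for result 1

The remaining breakdowns follow the same pattern; only the matched term, its idf, the fieldNorm, and the coord fractions differ.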
  2. Loonus, Y.: Einsatzbereiche der KI und ihre Relevanz für Information Professionals (2017) 0.01
    0.010051595 = product of:
      0.04020638 = sum of:
        0.04020638 = product of:
          0.08041276 = sum of:
            0.08041276 = weight(_text_:intelligenz in 5668) [ClassicSimilarity], result of:
              0.08041276 = score(doc=5668,freq=2.0), product of:
                0.21362439 = queryWeight, product of:
                  5.678294 = idf(docFreq=410, maxDocs=44218)
                  0.037621226 = queryNorm
                0.37642127 = fieldWeight in 5668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.678294 = idf(docFreq=410, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5668)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Series
    Künstliche Intelligenz
  3. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.00
    0.003185723 = product of:
      0.012742892 = sum of:
        0.012742892 = product of:
          0.025485784 = sum of:
            0.025485784 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.025485784 = score(doc=668,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2013 19:43:01
  4. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.00
    0.003185723 = product of:
      0.012742892 = sum of:
        0.012742892 = product of:
          0.025485784 = sum of:
            0.025485784 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.025485784 = score(doc=1605,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  5. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.00
    0.003185723 = product of:
      0.012742892 = sum of:
        0.012742892 = product of:
          0.025485784 = sum of:
            0.025485784 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.025485784 = score(doc=5011,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    7. 3.2019 16:32:22
  6. Song, J.; Huang, Y.; Qi, X.; Li, Y.; Li, F.; Fu, K.; Huang, T.: Discovering hierarchical topic evolution in time-stamped documents (2016) 0.00
    0.0026484418 = product of:
      0.010593767 = sum of:
        0.010593767 = product of:
          0.0317813 = sum of:
            0.0317813 = weight(_text_:k in 2853) [ClassicSimilarity], result of:
              0.0317813 = score(doc=2853,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.23664509 = fieldWeight in 2853, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2853)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
  7. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0025485782 = product of:
      0.010194313 = sum of:
        0.010194313 = product of:
          0.020388626 = sum of:
            0.020388626 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.020388626 = score(doc=5234,freq=2.0), product of:
                0.13174312 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037621226 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:33:49
  8. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.00
    0.0024969748 = product of:
      0.009987899 = sum of:
        0.009987899 = product of:
          0.029963696 = sum of:
            0.029963696 = weight(_text_:k in 354) [ClassicSimilarity], result of:
              0.029963696 = score(doc=354,freq=4.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.22311112 = fieldWeight in 354, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.03125 = fieldNorm(doc=354)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Classification
    TZG (FH K)
    GHBS
    TZG (FH K)
  9. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.00
    0.0022070347 = product of:
      0.008828139 = sum of:
        0.008828139 = product of:
          0.026484415 = sum of:
            0.026484415 = weight(_text_:k in 967) [ClassicSimilarity], result of:
              0.026484415 = score(doc=967,freq=2.0), product of:
                0.13429943 = queryWeight, product of:
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.037621226 = queryNorm
                0.19720423 = fieldWeight in 967, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.569778 = idf(docFreq=3384, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=967)
          0.33333334 = coord(1/3)
      0.25 = coord(1/4)
    
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., Naïve Bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.
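The abstract above frames hashtag popularity prediction as a plain supervised classification task over 7 content and 11 contextual features, evaluated with Micro-F1. As a rough, illustrative sketch only (random placeholder data and labels, not the authors' Twitter dataset or code), that formulation looks roughly like this in scikit-learn, using the best-performing model named in the abstract, logistic regression:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # Placeholder data: one row per new hashtag, with 7 content features
    # (from the hashtag string and its tweets) and 11 contextual features
    # (from the adopters' social graph), as described in the abstract.
    rng = np.random.default_rng(0)
    X = rng.random((1000, 7 + 11))
    y = rng.integers(0, 2, size=1000)   # 1 = hashtag became popular, 0 = did not

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    pred = clf.predict(X_test)

    print("Micro-F1:", f1_score(y_test, pred, average="micro"))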