Search (61 results, page 1 of 4)

  • Filter: theme_ss:"Data Mining"
  1. Witten, I.H.; Frank, E.: Data Mining : Praktische Werkzeuge und Techniken für das maschinelle Lernen (2000) 0.02
    Date
    27. 1.1996 10:29:55
    Footnote
    Rez. in: nfd 52(2001), H.7, S.427-428 (T. Mandl)
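The figure after each title (e.g. 0.02) is the relevance score the catalog's search engine assigns with Lucene's ClassicSimilarity. As a minimal sketch of how such a score is composed, the per-term weight combines tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryNorm, and fieldNorm; the statistics below (freq 2, docFreq 10020, maxDocs 44218, queryNorm 0.036501996, fieldNorm 0.09375) are the ones the engine reported for the term "h" in entry 1. The function name is illustrative, not part of the Lucene API:

```python
import math

def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Per-term score the way Lucene's ClassicSimilarity composes it:
    tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)),
    queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm,
    and the term contribution is queryWeight * fieldWeight."""
    tf = math.sqrt(freq)
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Index statistics reported for the term "h" in entry 1:
score = classic_term_score(freq=2.0, doc_freq=10020, max_docs=44218,
                           query_norm=0.036501996, field_norm=0.09375)
# score ~ 0.0298719: the weight(_text_:h) contribution, before the
# coord factors that scale it into the final 0.02 shown above
```

The final listed score additionally multiplies in coord factors (the fraction of query terms that matched), which is why the displayed value is smaller than any single term weight.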
  2. Keim, D.A.: Data Mining mit bloßem Auge (2002) 0.02
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.88-91
  3. Kruse, R.; Borgelt, C.: Suche im Datendschungel (2002) 0.02
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.80-81
  4. Wrobel, S.: Lern- und Entdeckungsverfahren (2002) 0.02
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.85-87
  5. Borgelt, C.; Kruse, R.: Unsicheres Wissen nutzen (2002) 0.02
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.82-84
  6. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.02
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  7. Tiefschürfen in Datenbanken (2002) 0.02
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.80-91
  8. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.02
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  9. Ma, Z.; Sun, A.; Cong, G.: On predicting the popularity of newly emerging hashtags in Twitter (2013) 0.01
    Abstract
    Because of Twitter's popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., Naïve bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.
    Date
    25. 6.2013 19:05:29
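Entry 9 reports that, among its five classifiers, logistic regression performs best "in terms of the Micro-F1 measure". As a minimal sketch of that metric (not the authors' code; the function name and labels are illustrative), micro-averaging pools true positives, false positives, and false negatives across all classes before computing precision and recall:

```python
from collections import Counter

def micro_f1(y_true, y_pred):
    """Micro-averaged F1: pool per-class true positives, false positives,
    and false negatives over all classes, then compute precision/recall
    from the pooled counts. For single-label classification this equals
    accuracy."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p wrongly
            fn[t] += 1  # true class t missed
    tp_sum = sum(tp.values())
    precision = tp_sum / (tp_sum + sum(fp.values()))
    recall = tp_sum / (tp_sum + sum(fn.values()))
    return 2 * precision * recall / (precision + recall)

f1 = micro_f1(["popular", "unpopular", "popular"],
              ["popular", "popular", "popular"])
# 2 pooled true positives, 1 false positive, 1 false negative
# -> precision = recall = 2/3, so micro-F1 = 2/3
```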
  10. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures of the h-index were 0.738, 0.652, and 0.658 and those of text-mining were 0.501, 0.653, and 0.551, respectively. Last, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those used currently by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
    Date
    21. 7.2016 19:29:19
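Entry 10 builds on main path analysis over a citation network. A minimal pure-Python sketch of one common variant (not the authors' implementation; the toy graph and function names are illustrative): weight each citation edge by its search path count (SPC), i.e. the number of source-to-sink paths traversing it, then extract the source-to-sink path with the largest summed weight:

```python
from collections import defaultdict

def spc_weights(edges):
    """Search Path Count: weight each edge (u, v) of a citation DAG by
    (# paths from any source to u) * (# paths from v to any sink)."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    n_minus, n_plus = {}, {}

    def from_sources(n):  # paths reaching n from source nodes
        if n not in n_minus:
            n_minus[n] = 1 if not pred[n] else sum(map(from_sources, pred[n]))
        return n_minus[n]

    def to_sinks(n):  # paths from n down to sink nodes
        if n not in n_plus:
            n_plus[n] = 1 if not succ[n] else sum(map(to_sinks, succ[n]))
        return n_plus[n]

    return {(u, v): from_sources(u) * to_sinks(v) for u, v in edges}

def main_path(edges):
    """Source-to-sink path maximizing the summed SPC weight."""
    spc = spc_weights(edges)
    succ, pred_count, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        pred_count[v] += 1
        nodes.update((u, v))
    best = {}  # node -> (best weight down to a sink, path)

    def solve(n):
        if n not in best:
            if not succ[n]:
                best[n] = (0, [n])
            else:
                w, tail = max((spc[(n, s)] + solve(s)[0], solve(s)[1])
                              for s in succ[n])
                best[n] = (w, [n] + tail)
        return best[n]

    sources = [n for n in nodes if pred_count[n] == 0]
    return max(solve(s) for s in sources)[1]

# Toy citation DAG: A is the oldest paper, E the newest.
edges = [("A", "B"), ("A", "C"), ("B", "D"),
         ("C", "D"), ("C", "E"), ("D", "E")]
path = main_path(edges)  # -> ["A", "C", "D", "E"]
```

The study in entry 10 extends this backbone with text mining, merging articles by conceptual similarity rather than relying on raw citation counts alone.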
  11. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    Classification
    BAQC (FH K)
    Date
    11. 5.2008 19:49:22
    GHBS
    BAQC (FH K)
  12. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.01
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. 
So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
  13. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.01
    Date
    29. 3.2002 17:31:17
  14. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    Date
    2. 4.2000 18:01:22
  15. KDD : techniques and applications (1998) 0.01
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held Singapore, 22-23 Feb 1997
  16. Schwartz, D.: Graphische Datenanalyse für digitale Bibliotheken : Leistungs- und Funktionsumfang moderner Analyse- und Visualisierungsinstrumente (2006) 0.01
    Source
    Vom Wandel der Wissensorganisation im Informationszeitalter: Festschrift für Walther Umstätter zum 65. Geburtstag, hrsg. von P. Hauke u. K. Umlauf
  17. Handbuch Web Mining im Marketing : Konzepte, Systeme, Fallstudien (2002) 0.01
    Editor
    Hippner, H. et al.
  18. Schmid, J.: Data mining : wie finde ich in Datensammlungen entscheidungsrelevante Muster? (1999) 0.01
    Source
    ARBIDO. 14(1999) H.5, S.11-13
  19. Heyer, G.; Läuter, M.; Quasthoff, U.; Wolff, C.: Texttechnologische Anwendungen am Beispiel Text Mining (2000) 0.01
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  20. Ohly, H.P.: Bibliometric mining : added value from document analysis and retrieval (2008) 0.01
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch

Languages

  • e 38
  • d 23
