Search (22 results, page 1 of 2)

  • theme_ss:"Data Mining"
  1. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.02
    0.015255922 = product of:
      0.045767765 = sum of:
        0.045767765 = product of:
          0.09153553 = sum of:
            0.09153553 = weight(_text_:methodology in 3603) [ClassicSimilarity], result of:
              0.09153553 = score(doc=3603,freq=6.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.43102458 = fieldWeight in 3603, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
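    The score breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, the following Python reproduces the top entry's score from the constants in the tree; the tf and idf formulas are ClassicSimilarity's documented defaults (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))).

```python
import math

# Constants copied from the explain tree of result 1.
freq, doc_freq, max_docs = 6.0, 1328, 44218
query_norm, field_norm = 0.047143444, 0.0390625

tf = math.sqrt(freq)                             # 2.4494898
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 4.504705
query_weight = idf * query_norm                  # 0.21236731 (idf * queryNorm)
field_weight = tf * idf * field_norm             # 0.43102458 (tf * idf * fieldNorm)

# weight(_text_:methodology) = queryWeight * fieldWeight,
# then the coord(1/2) and coord(1/3) factors from the tree.
score = query_weight * field_weight * 0.5 * (1.0 / 3.0)
print(round(score, 9))  # ~0.015255922
```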
    
    Abstract
    This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of R&D fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of R&D activities in relation to interdisciplinary R&D developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to apply bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data mining, applied to large amounts of scientific publications. As an example we describe the mapping of the field of neuroscience, one of the largest and fastest growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year, and the period covered is 1995-1998. Current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
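    As an illustrative aside, one common flavour of such 'bibliometric cartography' is co-word analysis: counting how often keywords co-occur across publications and placing strongly linked terms close together on the map. A minimal sketch with toy data follows; the actual methodology of van Raan and Noyons is considerably more elaborate.

```python
from itertools import combinations
from collections import Counter

# Toy keyword lists standing in for ~80,000 neuroscience papers per year.
papers = [
    {"synapse", "plasticity", "memory"},
    {"memory", "hippocampus", "plasticity"},
    {"dopamine", "reward", "addiction"},
    {"reward", "dopamine", "memory"},
]

# Co-occurrence counts are the raw material of a co-word 'map of science':
# frequently co-occurring keyword pairs end up close together on the map.
links = Counter()
for keywords in papers:
    links.update(combinations(sorted(keywords), 2))

for pair, n in links.most_common(5):
    print(n, *pair)
```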
  2. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.01
    0.014947688 = product of:
      0.044843063 = sum of:
        0.044843063 = product of:
          0.089686126 = sum of:
            0.089686126 = weight(_text_:methodology in 1142) [ClassicSimilarity], result of:
              0.089686126 = score(doc=1142,freq=4.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.42231607 = fieldWeight in 1142, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    We provide a brief, non-technical introduction to the text mining methodology known as "topic modeling." We summarize the theory and background of the method and discuss what kinds of things are found by topic models. Using a text corpus composed of the eight articles from the special issue of Poetics on the subject of topic models, we run a topic model on these articles, both to introduce the methodology and to help summarize some of the ways in which social and cultural scientists are using topic models. We review some of the critiques and debates over the use of the method, and finally we link these developments back to some of the original innovations in the field of content analysis that were pioneered by Harold D. Lasswell and colleagues during and just after World War II.
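    As a hedged illustration of the technique (not the authors' exact pipeline), a topic model such as LDA can be run in a few lines; the toy corpus below stands in for the eight Poetics articles.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# A tiny stand-in corpus; Mohr and Bogdanov use the eight Poetics articles.
docs = [
    "culture topic model text analysis meaning",
    "social science survey method measurement",
    "topic model corpus words distribution inference",
]

# Bag-of-words counts, then LDA with a small number of topics.
counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Top terms per topic: each row of components_ holds a topic's word weights.
terms = counts.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```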
  3. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.014903667 = product of:
      0.044711 = sum of:
        0.044711 = product of:
          0.089422 = sum of:
            0.089422 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.089422 = score(doc=4577,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    2. 4.2000 18:01:22
  4. KDD : techniques and applications (1998) 0.01
    0.012774572 = product of:
      0.038323715 = sum of:
        0.038323715 = product of:
          0.07664743 = sum of:
            0.07664743 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.07664743 = score(doc=6783,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  5. Deogun, J.S.: Feature selection and effective classifiers (1998) 0.01
    0.010569612 = product of:
      0.031708833 = sum of:
        0.031708833 = product of:
          0.063417666 = sum of:
            0.063417666 = weight(_text_:methodology in 2911) [ClassicSimilarity], result of:
              0.063417666 = score(doc=2911,freq=2.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.29862255 = fieldWeight in 2911, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2911)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Develops and analyzes 4 algorithms for feature selection in the context of rough set methodology. Develops the notion of accuracy of classification that can be used for upper or lower classification methods and defines the feature selection problem. Presents a discussion of upper classifiers, develops 4 feature selection heuristics, and discusses the family of stepwise backward selection algorithms. Analyzes the worst-case time complexity of all algorithms presented. Discusses details of the experiments and the results of applying the family of stepwise backward selection algorithms to learning data sets and a duodenal ulcer data set. Includes the experimental setup and the results of comparing lower classifiers and upper classifiers on the duodenal ulcer data set. Discusses extended decision tables.
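    A minimal sketch of stepwise backward feature selection in the spirit described; as assumptions, plain cross-validated accuracy stands in for the paper's rough-set notion of classification accuracy, and a generic scikit-learn classifier replaces the upper/lower classifiers.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
features = list(range(X.shape[1]))

def accuracy(cols):
    # Cross-validated classification accuracy on the chosen feature subset.
    model = LogisticRegression(max_iter=5000)
    return cross_val_score(model, X[:, cols], y, cv=5).mean()

# Stepwise backward selection: repeatedly drop a feature whose removal does
# not hurt accuracy, stopping once every removal would degrade it.
best = accuracy(features)
improved = True
while improved and len(features) > 1:
    improved = False
    for f in list(features):
        trial = [c for c in features if c != f]
        acc = accuracy(trial)
        if acc >= best:
            best, features, improved = acc, trial, True
            break

print(len(features), "features kept, accuracy", round(best, 3))
```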
  6. Sánchez, D.; Chamorro-Martínez, J.; Vila, M.A.: Modelling subjectivity in visual perception of orientation for image retrieval (2003) 0.01
    0.010569612 = product of:
      0.031708833 = sum of:
        0.031708833 = product of:
          0.063417666 = sum of:
            0.063417666 = weight(_text_:methodology in 1067) [ClassicSimilarity], result of:
              0.063417666 = score(doc=1067,freq=2.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.29862255 = fieldWeight in 1067, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1067)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this paper we combine computer vision and data mining techniques to model high-level concepts for image retrieval, on the basis of basic perceptual features of the human visual system. High-level concepts related to these features are learned and represented by means of a set of fuzzy association rules. The concepts so acquired can be used for image retrieval, with the advantage that it is not necessary to provide an image as a query. Instead, a query is formulated by using the labels that identify the learned concepts as search terms, and the retrieval process calculates the relevance of an image to the query by an inference mechanism. An additional feature of our methodology is that it can capture the user's subjectivity. For that purpose, fuzzy set theory is employed to measure the user's assessments about the fulfillment of a concept by an image.
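    A minimal sketch of a fuzzy association rule of this kind, assuming triangular membership functions and the min t-norm; the feature ("dominant edge orientation") and the concept label are invented for illustration, not taken from the paper.

```python
def triangular(x, a, b, c):
    # Membership in a triangular fuzzy set rising from a, peaking at b, falling to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def vertical(deg):
    # Hypothetical fuzzy set "orientation is vertical", peaking at 90 degrees.
    return triangular(deg, 60.0, 90.0, 120.0)

# Toy data: (dominant edge orientation in degrees, user's concept assessment in [0, 1]).
images = [(85.0, 0.9), (90.0, 1.0), (10.0, 0.1), (95.0, 0.8), (45.0, 0.4)]

# Fuzzy rule: IF orientation is "vertical" THEN image fulfils the learned concept.
# Fuzzy support and confidence computed with the min t-norm.
support = sum(min(vertical(o), r) for o, r in images) / len(images)
confidence = support * len(images) / sum(vertical(o) for o, _ in images)
print(round(support, 3), round(confidence, 3))
```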
  7. Dang, X.H.; Ong, K.-L.: Knowledge discovery in data streams (2009) 0.01
    0.010569612 = product of:
      0.031708833 = sum of:
        0.031708833 = product of:
          0.063417666 = sum of:
            0.063417666 = weight(_text_:methodology in 3829) [ClassicSimilarity], result of:
              0.063417666 = score(doc=3829,freq=2.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.29862255 = fieldWeight in 3829, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3829)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowing what to do with the massive amount of data collected has always been an ongoing issue for many organizations. While data mining has been touted as the solution, it has failed to deliver the expected impact despite its successes in many areas. One reason is that data mining algorithms were not designed for the real world: they usually assume a static view of the data and a stable execution environment where resources are abundant. The reality, however, is that data are constantly changing and the execution environment is dynamic. Hence, it becomes difficult for data mining to truly deliver timely and relevant results. Recently, the processing of stream data has received much attention. What is interesting is that the methodology for designing stream-based algorithms may well be the solution to the above problem. In this entry, we discuss this issue and present an overview of recent work.
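    To make the contrast concrete, stream-based algorithms summarize the data in a single pass with bounded memory instead of assuming the whole data set is available. A classic building block (not taken from this entry itself) is the Misra-Gries frequent-items sketch:

```python
from collections import Counter

def misra_gries(stream, k):
    """One-pass frequent-items summary using at most k - 1 counters."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = list("abacabadabacabae")
print(misra_gries(stream, k=3))  # 'a' dominates the stream
print(Counter(stream))           # exact counts, for comparison
```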
  8. Chen, C.-C.; Chen, A.-P.: Using data mining technology to provide a recommendation service in the digital library (2007) 0.01
    0.00880801 = product of:
      0.02642403 = sum of:
        0.02642403 = product of:
          0.05284806 = sum of:
            0.05284806 = weight(_text_:methodology in 2533) [ClassicSimilarity], result of:
              0.05284806 = score(doc=2533,freq=2.0), product of:
                0.21236731 = queryWeight, product of:
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.047143444 = queryNorm
                0.24885213 = fieldWeight in 2533, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.504705 = idf(docFreq=1328, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2533)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Purpose - Since library holdings have been increasing day by day, it is difficult for readers to find the books that interest them, as well as representative booklists. How to use meaningful information effectively to improve the service quality of the digital library therefore appears very important. The purpose of this paper is to provide a recommendation system architecture to promote digital library services in electronic libraries. Design/methodology/approach - In the proposed architecture, a two-phase data mining process using association rule and clustering methods is designed to generate a recommendation system. The process considers not only the relationships within a cluster of users but also the associations among the information accessed. Findings - With the advanced filter, the recommendations supported by the proposed system architecture closely meet users' needs. Originality/value - This paper not only constructs a recommendation service for readers searching books on the web but also takes the initiative in finding the most suitable books for readers. Furthermore, library managers are expected to purchase core and hot books from a limited budget to maintain and satisfy the requirements of readers while promoting digital library services.
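    A minimal sketch of the rule-mining half of such a two-phase process (the clustering phase is omitted for brevity, and the borrowing histories are invented): pairwise association rules of the form "who borrowed A also borrowed B", filtered by confidence, drive the recommendations.

```python
from collections import defaultdict
from itertools import combinations

# Invented borrowing histories: user -> set of borrowed titles.
histories = {
    "u1": {"data mining", "databases", "statistics"},
    "u2": {"data mining", "databases"},
    "u3": {"poetry", "novels"},
    "u4": {"data mining", "statistics"},
}

# Count single items and ordered pairs to derive rules A -> B.
pair_count, item_count = defaultdict(int), defaultdict(int)
for books in histories.values():
    for b in books:
        item_count[b] += 1
    for a, b in combinations(sorted(books), 2):
        pair_count[(a, b)] += 1
        pair_count[(b, a)] += 1

def recommend(borrowed, min_conf=0.5):
    # Score unseen titles by the best rule confidence, i.e. P(B | A).
    scores = defaultdict(float)
    for a in borrowed:
        for (x, y), n in pair_count.items():
            if x == a and y not in borrowed:
                conf = n / item_count[a]
                if conf >= min_conf:
                    scores[y] = max(scores[y], conf)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"data mining"}))  # ['databases', 'statistics'] (tie order may vary)
```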
  9. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.0085163815 = product of:
      0.025549144 = sum of:
        0.025549144 = product of:
          0.051098287 = sum of:
            0.051098287 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.051098287 = score(doc=1737,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22.11.1998 18:57:22
  10. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.0085163815 = product of:
      0.025549144 = sum of:
        0.025549144 = product of:
          0.051098287 = sum of:
            0.051098287 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.051098287 = score(doc=4261,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17. 7.2002 19:22:06
  11. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.01
    0.0085163815 = product of:
      0.025549144 = sum of:
        0.025549144 = product of:
          0.051098287 = sum of:
            0.051098287 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.051098287 = score(doc=1270,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  12. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.0074518337 = product of:
      0.0223555 = sum of:
        0.0223555 = product of:
          0.044711 = sum of:
            0.044711 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.044711 = score(doc=2908,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  13. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    0.006387286 = product of:
      0.019161858 = sum of:
        0.019161858 = product of:
          0.038323715 = sum of:
            0.038323715 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
              0.038323715 = score(doc=1383,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.23214069 = fieldWeight in 1383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1383)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:46:06
  14. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.0053227386 = product of:
      0.015968215 = sum of:
        0.015968215 = product of:
          0.03193643 = sum of:
            0.03193643 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.03193643 = score(doc=668,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2013 19:43:01
  15. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    0.0053227386 = product of:
      0.015968215 = sum of:
        0.015968215 = product of:
          0.03193643 = sum of:
            0.03193643 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.03193643 = score(doc=1605,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  16. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.0053227386 = product of:
      0.015968215 = sum of:
        0.015968215 = product of:
          0.03193643 = sum of:
            0.03193643 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.03193643 = score(doc=5011,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    7. 3.2019 16:32:22
  17. Peters, G.; Gaese, V.: ¬Das DocCat-System in der Textdokumentation von G+J (2003) 0.00
    0.0042581907 = product of:
      0.012774572 = sum of:
        0.012774572 = product of:
          0.025549144 = sum of:
            0.025549144 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.025549144 = score(doc=1507,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2003 11:45:36
  18. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.00
    0.0042581907 = product of:
      0.012774572 = sum of:
        0.012774572 = product of:
          0.025549144 = sum of:
            0.025549144 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.025549144 = score(doc=2403,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    3. 5.1997 8:44:22
  19. Jäger, L.: Von Big Data zu Big Brother (2018) 0.00
    0.0042581907 = product of:
      0.012774572 = sum of:
        0.012774572 = product of:
          0.025549144 = sum of:
            0.025549144 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.025549144 = score(doc=5234,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2018 11:33:49
  20. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.00
    0.003193643 = product of:
      0.009580929 = sum of:
        0.009580929 = product of:
          0.019161858 = sum of:
            0.019161858 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.019161858 = score(doc=1178,freq=2.0), product of:
                0.16508831 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047143444 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.

Languages

  • e 15
  • d 7

Types