Search (21 results, page 1 of 2)

  • theme_ss:"Data Mining"
  1. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.02
    0.024515856 = product of:
      0.049031712 = sum of:
        0.049031712 = product of:
          0.098063424 = sum of:
            0.098063424 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.098063424 = score(doc=4577,freq=2.0), product of:
                0.18104185 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051699217 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 4.2000 18:01:22
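    The relevance figure after each title comes from Lucene's ClassicSimilarity (tf-idf) scoring, and the tree above spells the arithmetic out for the first hit. A minimal Python sketch, reusing the constants from that tree (the variable names are ours, not Lucene's), reproduces the score:

        import math

        # Constants copied from the explanation tree above (doc 4577, term "22").
        freq       = 2.0          # term frequency in the matched field
        doc_freq   = 3622         # documents containing the term
        max_docs   = 44218        # documents in the index
        field_norm = 0.109375     # field length normalization
        query_norm = 0.051699217  # query normalization constant

        tf  = math.sqrt(freq)                          # 1.4142135
        idf = 1 + math.log(max_docs / (doc_freq + 1))  # 3.5018296
        query_weight = idf * query_norm                # 0.18104185 = queryWeight
        field_weight = tf * idf * field_norm           # 0.5416616  = fieldWeight

        score = query_weight * field_weight            # 0.098063424
        print(score * 0.5 * 0.5)                       # two coord(1/2) factors -> 0.024515856

    The same formula, with a different idf and fieldNorm per term and document, yields every score in this list.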
  2. KDD : techniques and applications (1998) 0.02
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  3. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.01
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, the information about user access patterns in Web server logs can be used for information personalization or for improving Web page design.
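    That closing example is easy to make concrete. A small sketch, assuming Apache's common log format (the log lines and addresses below are invented), groups requested pages by visitor to expose such access patterns:

        import re
        from collections import defaultdict

        # Two invented lines in Apache "common log" format.
        LOG = (
            '203.0.113.7 - - [10/Oct/2000:13:55:36 -0700] "GET /docs/mining.html HTTP/1.0" 200 2326\n'
            '203.0.113.7 - - [10/Oct/2000:13:57:01 -0700] "GET /docs/kdd.html HTTP/1.0" 200 1104\n'
        )

        pattern = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')
        visits = defaultdict(list)
        for line in LOG.splitlines():
            match = pattern.match(line)
            if match:
                ip, path = match.groups()
                visits[ip].append(path)   # pages requested per visitor
        print(dict(visits))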
  4. Whittle, M.; Eaglestone, B.; Ford, N.; Gillet, V.J.; Madden, A.: Data mining of search engine logs (2007) 0.01
    
    Abstract
    This article reports on the development of a novel method for the analysis of Web logs. The method uses techniques that look for similarities between queries and identify sequences of query transformation. It allows sequences of query transformations to be represented as graphical networks, thereby giving a richer view of search behavior than is possible with the usual sequential descriptions. We also perform a basic analysis to study the correlations between observed transformation codes, with results that appear to show evidence of behavior habits. The method was developed using transaction logs from the Excite search engine to provide a tool for an ongoing research project that is endeavoring to develop a greater understanding of Web-based searching by the general public.
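    To picture such a transformation network, consider the toy sketch below; the session and the four transformation codes are invented and far cruder than the coding scheme the article develops:

        from collections import Counter
        from itertools import pairwise  # Python 3.10+

        # An invented session: successive queries from one user.
        session = ["data mining", "data mining logs", "log analysis",
                   "web log analysis", "web log analysis"]

        def classify(prev, curr):
            """Crude illustrative codes, not the article's scheme."""
            p, c = set(prev.split()), set(curr.split())
            if p == c:  return "repeat"
            if p < c:   return "specialize"   # terms added
            if p > c:   return "generalize"   # terms removed
            return "reformulate"              # terms swapped

        # Edges of the network: query -> query, labelled with a code.
        edges = [(a, b, classify(a, b)) for a, b in pairwise(session)]
        print(Counter(code for _, _, code in edges))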
  5. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    
    Date
    22.11.1998 18:57:22
  6. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    
    Date
    17. 7.2002 19:22:06
  7. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  8. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  9. Zhang, Z.; Li, Q.; Zeng, D.; Ga, H.: Extracting evolutionary communities in community question answering (2014) 0.01
    
    Abstract
    With the rapid growth of Web 2.0, community question answering (CQA) has become a prevalent information seeking channel, in which users form interactive communities by posting questions and providing answers. Communities may evolve over time because of changes in users' interests and activities, and because new users join the network. To better understand user interactions in CQA communities, it is necessary to analyze the community structures and track community evolution over time. Existing work in CQA focuses on question searching or content quality detection; the important problems of community extraction and evolutionary pattern detection have not been studied. In this article, we propose a probabilistic community model (PCM) to extract overlapping community structures and capture their evolution patterns in CQA. The empirical results show that our algorithm appears to improve the community extraction quality. We show empirically, using the iPhone data set, that interesting community evolution patterns can be discovered, with each evolution pattern reflecting the variation of users' interests over time. Our analysis suggests that individual users could benefit from tracking the transition of products to gain comprehensive information. We also show that the communities provide a decision-making basis for business.
  10. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    
    Date
    22. 3.2008 14:46:06
  11. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.01
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers who intend to enter this area. The book begins with an introduction chapter, which explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning, and iii) applications.
  12. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.01
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
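    As a taste of the kind of end-user tool described here, the sketch below mines the WordNet database through NLTK's interface (our choice of library, not Cohen's tooling; the corpus must be downloaded once):

        import nltk
        nltk.download("wordnet", quiet=True)   # one-time corpus fetch
        from nltk.corpus import wordnet as wn

        # Pull definitions and broader terms ("hypernyms") for a query word.
        for synset in wn.synsets("library"):
            broader = [h.name() for h in synset.hypernyms()]
            print(synset.name(), "-", synset.definition(), "|", broader)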
  13. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    
    Date
    22. 3.2013 19:43:01
  14. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  15. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    
    Date
    7. 3.2019 16:32:22
  16. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    
    Date
    22. 4.2003 11:45:36
  17. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.01
    
    Date
    3. 5.1997 8:44:22
  18. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    
    Date
    22. 1.2018 11:33:49
  19. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.01
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
  20. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    
    Date
    11. 5.2008 19:49:22

Languages

  • English 14
  • German 7
