Search (22 results, page 1 of 2)

  • Filter: theme_ss:"Data Mining"
  1. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.03
    0.03461188 = product of:
      0.06922376 = sum of:
        0.06922376 = product of:
          0.13844752 = sum of:
            0.13844752 = weight(_text_:word in 3884) [ClassicSimilarity], result of:
              0.13844752 = score(doc=3884,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.49155584 = fieldWeight in 3884, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
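    The indented breakdown above (and on each result below) is Lucene explain() output for the ClassicSimilarity scorer, i.e. plain TF-IDF with coordination factors. As a minimal sketch, assuming ClassicSimilarity's standard formulas, the top-ranked score can be reproduced in Python from the constants in the tree:

      import math

      # Constants copied from the explain tree for result 1 (doc 3884).
      freq       = 4.0         # occurrences of "word" in the field
      idf        = 5.2432623   # 1 + ln(maxDocs / (docFreq + 1)), docFreq=634, maxDocs=44218
      query_norm = 0.05371688  # query-wide normalization constant
      field_norm = 0.046875    # stored length norm of the field

      tf = math.sqrt(freq)                   # 2.0 = tf(freq=4.0)
      query_weight = idf * query_norm        # 0.28165168 = queryWeight
      field_weight = tf * idf * field_norm   # 0.49155584 = fieldWeight
      raw = query_weight * field_weight      # 0.13844752 = weight(_text_:word in 3884)
      score = raw * 0.5 * 0.5                # two coord(1/2) factors: 1 of 2 clauses matched
      print(round(score, 8))                 # 0.03461188, the ranking score shown above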
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
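    The limitation described above is easy to make concrete: word associations can be intransitive, while distances in a single metric map must satisfy the triangle inequality, so no placement of points can honor such data. A toy illustration in Python (the word triple and similarity values are invented; this is not the paper's method):

      # "tie" is strongly associated with both "suit" and "knot",
      # but "suit" and "knot" are barely associated with each other.
      sim = {("tie", "suit"): 0.9, ("tie", "knot"): 0.9, ("suit", "knot"): 0.0}

      # Convert similarities to distances (d = 1 - s) and test the triangle inequality.
      d = {pair: 1.0 - s for pair, s in sim.items()}
      lhs = d[("suit", "knot")]                       # 1.0
      rhs = d[("tie", "suit")] + d[("tie", "knot")]   # 0.2
      print(lhs <= rhs)  # False: these three words fit in no single metric map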
  2. Wang, F.L.; Yang, C.C.: Mining Web data for Chinese segmentation (2007) 0.03
    0.028843235 = product of:
      0.05768647 = sum of:
        0.05768647 = product of:
          0.11537294 = sum of:
            0.11537294 = weight(_text_:word in 604) [ClassicSimilarity], result of:
              0.11537294 = score(doc=604,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.40962988 = fieldWeight in 604, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=604)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Modern information retrieval systems use keywords within documents as indexing terms for the search of relevant documents. As Chinese is an ideographic, character-based language, the words in its texts are not delimited by white space, so indexing of Chinese documents is impossible without a proper segmentation algorithm. Many Chinese segmentation algorithms have been proposed in the past. Traditional segmentation algorithms cannot operate without a large dictionary or a large corpus of training data. Nowadays, the Web has become the largest corpus, and it is ideal for Chinese segmentation. Although most search engines have problems segmenting texts into proper words, they maintain huge databases of documents and of the frequencies of character sequences in those documents. These databases are important potential resources for segmentation. In this paper, we propose a segmentation algorithm that mines Web data with the help of search engines. In addition, the Romanized pinyin of the Chinese language indicates word boundaries in the text. Our algorithm is the first to utilize Romanized pinyin for segmentation. It is the first unified segmentation algorithm for the Chinese language across different geographical areas, and it is also domain independent because of the nature of the Web. Experiments have been conducted on the datasets of a recent Chinese segmentation competition. The results show that our algorithm outperforms traditional algorithms in terms of precision and recall. Moreover, our algorithm can effectively deal with the problems of segmentation ambiguity, new word (unknown word) detection, and stop words.
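    The paper's exact algorithm is not given here, but the core idea the abstract describes, scoring candidate segmentations against character-sequence frequencies of the kind search engines already maintain, can be sketched with dynamic programming. All frequencies and helper names below are invented for illustration:

      from functools import lru_cache

      # Hypothetical character-sequence counts, standing in for a search engine's database.
      freq = {"中国": 5000, "中": 800, "国人": 300, "人": 900, "中国人": 4000}
      total = sum(freq.values())

      def score(word: str) -> float:
          """Relative frequency of a candidate word; unseen sequences get a tiny floor."""
          return freq.get(word, 0.1) / total

      @lru_cache(maxsize=None)
      def segment(text: str) -> tuple:
          """Best-scoring segmentation, found by trying every split point."""
          if not text:
              return 1.0, ()
          best = (0.0, (text,))
          for i in range(1, len(text) + 1):
              head_score = score(text[:i])
              tail_score, tail_words = segment(text[i:])
              if head_score * tail_score > best[0]:
                  best = (head_score * tail_score, (text[:i],) + tail_words)
          return best

      print(segment("中国人")[1])  # ('中国人',): the highest-frequency reading wins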
  3. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.03
    0.025472634 = product of:
      0.050945267 = sum of:
        0.050945267 = product of:
          0.101890534 = sum of:
            0.101890534 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.101890534 = score(doc=4577,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 4.2000 18:01:22
  4. Organisciak, P.; Schmidt, B.M.; Downie, J.S.: Giving shape to large digital libraries through exploratory data analysis (2022) 0.02
    0.024474295 = product of:
      0.04894859 = sum of:
        0.04894859 = product of:
          0.09789718 = sum of:
            0.09789718 = weight(_text_:word in 473) [ClassicSimilarity], result of:
              0.09789718 = score(doc=473,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.34758246 = fieldWeight in 473, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=473)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The emergence of large multi-institutional digital libraries has opened the door to aggregate-level examinations of the published word. Such large-scale analysis offers a new way to pursue traditional problems in the humanities and social sciences, using digital methods to ask routine questions of large corpora. However, inquiry into multiple centuries of books is constrained by the burdens of scale, where statistical inference is technically complex and limited by hurdles to access and flexibility. This work examines the role that exploratory data analysis and visualization tools may play in understanding large bibliographic datasets. We present one such tool, HathiTrust+Bookworm, which allows multifaceted exploration of the multimillion work HathiTrust Digital Library, and center it in the broader space of scholarly tools for exploratory data analysis.
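    As a toy illustration of the kind of aggregate query such exploratory tools support (the data and column names below are invented; this is not the HathiTrust+Bookworm API):

      import pandas as pd

      # Invented per-year term counts, standing in for a multimillion-volume corpus.
      rows = [
          {"year": 1890, "term": "railway",    "count": 120},
          {"year": 1890, "term": "telegraph",  "count": 80},
          {"year": 1950, "term": "railway",    "count": 60},
          {"year": 1950, "term": "television", "count": 210},
      ]
      df = pd.DataFrame(rows)

      # Term usage per year: the basic move behind corpus-level exploratory analysis.
      trend = df.groupby(["year", "term"])["count"].sum().unstack(fill_value=0)
      print(trend)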
  5. KDD : techniques and applications (1998) 0.02
    0.021833686 = product of:
      0.043667372 = sum of:
        0.043667372 = product of:
          0.087334745 = sum of:
            0.087334745 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.087334745 = score(doc=6783,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  6. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.02
    0.020395245 = product of:
      0.04079049 = sum of:
        0.04079049 = product of:
          0.08158098 = sum of:
            0.08158098 = weight(_text_:word in 605) [ClassicSimilarity], result of:
              0.08158098 = score(doc=605,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.28965205 = fieldWeight in 605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Documents discussing public affairs, common themes, interesting products, and so on are reported and distributed on the Web. Positive and negative opinions embedded in documents are useful references and feedback for governments to improve their services, for companies to market their products, and for customers to decide on their purchases. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and provide information with finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and the nonsupportive evidence. Opinion tracking captures subjective information from various genres and monitors the development of opinions along spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and bloggers' articles are adopted. Documents in the evaluation corpora are tagged at different granularities, from words and sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% for verbs and 63.75% for nouns. Utilizing the mined sentiment words together with topical words, we achieve an f-measure of 62.16% at the sentence level and 74.37% at the document level.
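    A minimal lexicon-based sketch of the polarity-decision step the abstract describes (the lexicon entries and weights below are invented; the paper mines its own weights from Chinese word structures):

      # Sentiment words carry signed weights; a sentence's polarity is the sign of their sum.
      lexicon = {"improve": 0.8, "useful": 0.6, "problem": -0.5, "fail": -0.9}

      def polarity(sentence: str) -> str:
          total = sum(lexicon.get(w, 0.0) for w in sentence.lower().split())
          return "positive" if total > 0 else "negative" if total < 0 else "neutral"

      print(polarity("the service will improve"))    # positive
      print(polarity("a problem that made it fail")) # negative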
  7. Ayadi, H.; Torjmen-Khemakhem, M.; Daoud, M.; Huang, J.X.; Jemaa, M.B.: Mining correlations between medically dependent features and image retrieval models for query classification (2017) 0.02
    0.020395245 = product of:
      0.04079049 = sum of:
        0.04079049 = product of:
          0.08158098 = sum of:
            0.08158098 = weight(_text_:word in 3607) [ClassicSimilarity], result of:
              0.08158098 = score(doc=3607,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.28965205 = fieldWeight in 3607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3607)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The abundance of medical resources has encouraged the development of systems that allow efficient searches of information in large medical image data sets. State-of-the-art image retrieval models fall into three categories: content-based (visual) models, textual models, and combined models. Content-based models use visual features to answer image queries, textual models use word matching to answer textual queries, and combined models use both textual and visual features to answer queries. Nevertheless, most previous work in this field has used the same image retrieval model regardless of the query type. In this article, we define a list of generic and specific medical query features and exploit them in an association rule mining technique to discover correlations between query features and image retrieval models. Based on these rules, we propose an associative classifier (NaiveClass) to find the most suitable retrieval model for a new textual query. We also propose a second associative classifier (SmartClass) to select the most appropriate default class for the query. Experiments are performed on Medical ImageCLEF queries from 2008 to 2012 to evaluate the impact of the proposed query features on classification performance. The results show that combining our proposed specific and generic query features is effective for query classification.
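    The association-rule step can be sketched with plain support/confidence counting. Each query below is a toy transaction of query features plus the retrieval model that answered it best; all feature names, models, and thresholds are invented:

      from collections import Counter
      from itertools import combinations

      queries = [
          {"has_modality", "has_anatomy", "best:textual"},
          {"has_modality", "has_anatomy", "best:textual"},
          {"has_modality", "best:visual"},
          {"has_anatomy", "best:combined"},
      ]

      def rules(transactions, min_support=0.4, min_confidence=0.8):
          n = len(transactions)
          counts = Counter()
          for t in transactions:
              for size in range(1, len(t) + 1):
                  for itemset in combinations(sorted(t), size):
                      counts[itemset] += 1
          for itemset, c in counts.items():
              if len(itemset) < 2 or c / n < min_support:
                  continue
              for i in range(1, len(itemset)):
                  for lhs in combinations(itemset, i):
                      if c / counts[lhs] >= min_confidence:
                          rhs = tuple(x for x in itemset if x not in lhs)
                          yield lhs, rhs, c / n, c / counts[lhs]

      # Prints, among others: ('has_anatomy', 'has_modality') -> ('best:textual',)
      for lhs, rhs, sup, conf in rules(queries):
          print(lhs, "->", rhs, f"support={sup:.2f} confidence={conf:.2f}")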
  8. Schwartz, F.; Fang, Y.C.: Citation data analysis on hydrogeology (2007) 0.02
    0.016316198 = product of:
      0.032632396 = sum of:
        0.032632396 = product of:
          0.06526479 = sum of:
            0.06526479 = weight(_text_:word in 433) [ClassicSimilarity], result of:
              0.06526479 = score(doc=433,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.23172164 = fieldWeight in 433, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.03125 = fieldNorm(doc=433)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article explores the status of research in hydrogeology using data mining techniques. First we explain what citation analysis is and review some of the previous work on it. The main idea in this article is to address some common issues about citation numbers and the use of these data. To validate the use of citation numbers, we compare the citation patterns for Water Resources Research papers in the 1980s with those in the 1990s. The citation growth for highly cited authors from the 1980s is used to examine whether it is possible to predict the citation patterns for highly cited authors in the 1990s. If the citation data prove steady and stable, these numbers can then be used to explore the evolution of science in hydrogeology. The famous quotation "If you are not the lead dog, the scenery never changes," attributed to Lee Iacocca, points to the importance of an entrepreneurial spirit in all forms of endeavor. In the case of hydrogeological research, impact analysis makes it clear how important it is to be a pioneer. Statistical correlation coefficients are used to retrieve, from a collection of 2,847 papers published before and after 1991, those sharing the same topics as 273 papers published in Water Resources Research in 1991. The numbers of papers before and after 1991 are then plotted against various citation levels for the 1991 papers, to compare the population distributions before and after that year. Similarity metrics based on word counts ensure that the "before" papers are like ancestors and the "after" papers like descendants in the same type of research. This exercise shows how many papers are populated before and after 1991 (chosen because the numbers of papers before and after that year are balanced). In addition, the impact of papers is measured in terms of citations, presented as a "percentile" (a relative measure based on rankings within one year) in order to minimize the effect of time.
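    A minimal sketch of the "percentile" measure described above: each paper's citation count is ranked against all papers from the same publication year, which removes the age advantage of older papers. The counts below are invented:

      citations_1991 = {"paper_a": 120, "paper_b": 15, "paper_c": 4, "paper_d": 55, "paper_e": 0}

      def percentiles(counts):
          values = sorted(counts.values())
          n = len(values)
          # share of same-year papers cited no more than this one
          return {p: 100.0 * sum(v <= c for v in values) / n for p, c in counts.items()}

      for paper, pct in sorted(percentiles(citations_1991).items()):
          print(f"{paper}: {pct:.0f}th percentile")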
  9. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.058223166 = score(doc=1737,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.11.1998 18:57:22
  10. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.058223166 = score(doc=4261,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 7.2002 19:22:06
  11. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    0.014555791 = product of:
      0.029111583 = sum of:
        0.029111583 = product of:
          0.058223166 = sum of:
            0.058223166 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.058223166 = score(doc=1270,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  12. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.012736317 = product of:
      0.025472634 = sum of:
        0.025472634 = product of:
          0.050945267 = sum of:
            0.050945267 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.050945267 = score(doc=2908,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  13. Lackes, R.; Tillmanns, C.: Data Mining für die Unternehmenspraxis : Entscheidungshilfen und Fallstudien mit führenden Softwarelösungen (2006) 0.01
    0.010916843 = product of:
      0.021833686 = sum of:
        0.021833686 = product of:
          0.043667372 = sum of:
            0.043667372 = weight(_text_:22 in 1383) [ClassicSimilarity], result of:
              0.043667372 = score(doc=1383,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.23214069 = fieldWeight in 1383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1383)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 14:46:06
  14. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.01
    0.009097369 = product of:
      0.018194739 = sum of:
        0.018194739 = product of:
          0.036389478 = sum of:
            0.036389478 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
              0.036389478 = score(doc=668,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.19345059 = fieldWeight in 668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=668)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2013 19:43:01
  15. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.01
    0.009097369 = product of:
      0.018194739 = sum of:
        0.018194739 = product of:
          0.036389478 = sum of:
            0.036389478 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.036389478 = score(doc=1605,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
  16. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.009097369 = product of:
      0.018194739 = sum of:
        0.018194739 = product of:
          0.036389478 = sum of:
            0.036389478 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
              0.036389478 = score(doc=5011,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.19345059 = fieldWeight in 5011, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5011)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    7. 3.2019 16:32:22
  17. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.0072778957 = product of:
      0.014555791 = sum of:
        0.014555791 = product of:
          0.029111583 = sum of:
            0.029111583 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.029111583 = score(doc=1507,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 11:45:36
  18. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.01
    0.0072778957 = product of:
      0.014555791 = sum of:
        0.014555791 = product of:
          0.029111583 = sum of:
            0.029111583 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.029111583 = score(doc=2403,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 5.1997 8:44:22
  19. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    0.0072778957 = product of:
      0.014555791 = sum of:
        0.014555791 = product of:
          0.029111583 = sum of:
            0.029111583 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.029111583 = score(doc=5234,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:33:49
  20. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.01
    0.0054584215 = product of:
      0.010916843 = sum of:
        0.010916843 = product of:
          0.021833686 = sum of:
            0.021833686 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.021833686 = score(doc=1178,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.

Languages

  • English 15
  • German 7
