Search (41 results, page 1 of 3)

  • theme_ss:"Data Mining"
  1. Keim, D.A.: Data Mining mit bloßem Auge (2002) 0.03
    0.03066174 = product of:
      0.09198522 = sum of:
        0.07164246 = product of:
          0.14328492 = sum of:
            0.14328492 = weight(_text_:2002 in 1086) [ClassicSimilarity], result of:
              0.14328492 = score(doc=1086,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.8985933 = fieldWeight in 1086, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1086)
          0.5 = coord(1/2)
        0.02034276 = product of:
          0.06102828 = sum of:
            0.06102828 = weight(_text_:29 in 1086) [ClassicSimilarity], result of:
              0.06102828 = score(doc=1086,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.46638384 = fieldWeight in 1086, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1086)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.88-91
    Year
    2002
  2. Kruse, R.; Borgelt, C.: Suche im Datendschungel (2002) 0.03
    0.03066174 = product of:
      0.09198522 = sum of:
        0.07164246 = product of:
          0.14328492 = sum of:
            0.14328492 = weight(_text_:2002 in 1087) [ClassicSimilarity], result of:
              0.14328492 = score(doc=1087,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.8985933 = fieldWeight in 1087, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1087)
          0.5 = coord(1/2)
        0.02034276 = product of:
          0.06102828 = sum of:
            0.06102828 = weight(_text_:29 in 1087) [ClassicSimilarity], result of:
              0.06102828 = score(doc=1087,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.46638384 = fieldWeight in 1087, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1087)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.80-81
    Year
    2002
  3. Wrobel, S.: Lern- und Entdeckungsverfahren (2002) 0.03
    0.03066174 = product of:
      0.09198522 = sum of:
        0.07164246 = product of:
          0.14328492 = sum of:
            0.14328492 = weight(_text_:2002 in 1105) [ClassicSimilarity], result of:
              0.14328492 = score(doc=1105,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.8985933 = fieldWeight in 1105, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1105)
          0.5 = coord(1/2)
        0.02034276 = product of:
          0.06102828 = sum of:
            0.06102828 = weight(_text_:29 in 1105) [ClassicSimilarity], result of:
              0.06102828 = score(doc=1105,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.46638384 = fieldWeight in 1105, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1105)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.85-87
    Year
    2002
  4. Borgelt, C.; Kruse, R.: Unsicheres Wissen nutzen (2002) 0.03
    0.025551451 = product of:
      0.07665435 = sum of:
        0.05970205 = product of:
          0.1194041 = sum of:
            0.1194041 = weight(_text_:2002 in 1104) [ClassicSimilarity], result of:
              0.1194041 = score(doc=1104,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.74882776 = fieldWeight in 1104, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1104)
          0.5 = coord(1/2)
        0.016952302 = product of:
          0.050856903 = sum of:
            0.050856903 = weight(_text_:29 in 1104) [ClassicSimilarity], result of:
              0.050856903 = score(doc=1104,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.38865322 = fieldWeight in 1104, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1104)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.82-84
    Year
    2002
  5. Tiefschürfen in Datenbanken (2002) 0.02
    0.02044116 = product of:
      0.06132348 = sum of:
        0.047761638 = product of:
          0.095523275 = sum of:
            0.095523275 = weight(_text_:2002 in 996) [ClassicSimilarity], result of:
              0.095523275 = score(doc=996,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.5990622 = fieldWeight in 996, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0625 = fieldNorm(doc=996)
          0.5 = coord(1/2)
        0.01356184 = product of:
          0.04068552 = sum of:
            0.04068552 = weight(_text_:29 in 996) [ClassicSimilarity], result of:
              0.04068552 = score(doc=996,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.31092256 = fieldWeight in 996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=996)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    31.12.1996 19:29:41
    Source
    Spektrum der Wissenschaft. 2002, H.11, S.80-91
    Year
    2002
  6. Bath, P.A.: Data mining in health and medical information (2003) 0.02
    0.018760383 = product of:
      0.05628115 = sum of:
        0.04271931 = product of:
          0.08543862 = sum of:
            0.08543862 = weight(_text_:2002 in 4263) [ClassicSimilarity], result of:
              0.08543862 = score(doc=4263,freq=4.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.5358175 = fieldWeight in 4263, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.5 = coord(1/2)
        0.01356184 = product of:
          0.04068552 = sum of:
            0.04068552 = weight(_text_:29 in 4263) [ClassicSimilarity], result of:
              0.04068552 = score(doc=4263,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.31092256 = fieldWeight in 4263, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4263)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     Data mining (DM) is part of a process by which information can be extracted from data or databases and used to inform decision making in a variety of contexts (Benoit, 2002; Michalski, Bratko & Kubat, 1997). DM includes a range of tools and methods for extracting information; their use in the commercial sector for knowledge extraction and discovery has been one of the main driving forces in their development (Adriaans & Zantinge, 1996; Benoit, 2002). DM has been developed and applied in numerous areas. This review describes its use in analyzing health and medical information.
    Date
    23.10.2005 18:29:03
  7. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.014618623 = product of:
      0.04385587 = sum of:
        0.037135947 = weight(_text_:geschichte in 1507) [ClassicSimilarity], result of:
          0.037135947 = score(doc=1507,freq=2.0), product of:
            0.17679906 = queryWeight, product of:
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.037198927 = queryNorm
            0.21004607 = fieldWeight in 1507, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7528 = idf(docFreq=1036, maxDocs=44218)
              0.03125 = fieldNorm(doc=1507)
        0.006719922 = product of:
          0.020159766 = sum of:
            0.020159766 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.020159766 = score(doc=1507,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     We will first present the basics of the text mining system at IBM, and then describe our own project in somewhat more breadth and detail, since that is the part we know best. Accordingly there are two parts, one from Heidelberg and one from Hamburg. Once more on the technology: text mining is a technology developed by IBM that was assembled for us in a particular configuration and implementation. For a long time our project was called DocText Miner; for some time now, at IBM's suggestion, it has been called DocCat, which is meant to be short for Document Categoriser, a name that is both pleasant and descriptive. We begin with text mining as developed at IBM in Heidelberg. There, automatic indexing is understood as one instance, that is, one part, of text mining. The problems involved are pointed out, and text mining is essentially a method for structuring and searching large document collections, for extracting information and, this being the more ambitious claim, implicit relationships. Whether the latter succeeds may be left open. IBM does this quantitatively, empirically, approximately and fast, which really has to be acknowledged. The goal, and this was very important for our project, is not to understand the text; rather, the result of these procedures is what they call, in borrowed English, a bundle of words or a bag of words: a set of meaning-bearing terms extracted from a text on the basis of algorithms, that is, essentially on the basis of computational operations. There is a fair amount of linguistic preliminary work, and a little linguistics is involved, but it is not the foundation of the whole approach. What they did for us was the annotation of press texts for our press database. For those not familiar with it: Gruner + Jahr has operated a text documentation unit that has maintained a database since the beginning of the 1970s; it currently holds about 6.5 million documents, of which a little over 1 million are full texts from 1993 onwards. For a long time the principle was that we assigned descriptors to the documents stored in the database, and we continued this principle in a slimmed-down form when full text was introduced. These 6.5 million documents are also accompanied by roughly 10 million facsimile pages, because we additionally keep the facsimiles as standard practice.
    Date
    22. 4.2003 11:45:36
  8. Raan, A.F.J. van; Noyons, E.C.M.: Discovery of patterns of scientific and technological development and knowledge transfer (2002) 0.01
    0.014598787 = product of:
      0.04379636 = sum of:
        0.035320207 = product of:
          0.070640415 = sum of:
            0.070640415 = weight(_text_:2002 in 3603) [ClassicSimilarity], result of:
              0.070640415 = score(doc=3603,freq=7.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.44301245 = fieldWeight in 3603, product of:
                  2.6457512 = tf(freq=7.0), with freq of:
                    7.0 = termFreq=7.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.5 = coord(1/2)
        0.008476151 = product of:
          0.025428452 = sum of:
            0.025428452 = weight(_text_:29 in 3603) [ClassicSimilarity], result of:
              0.025428452 = score(doc=3603,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.19432661 = fieldWeight in 3603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3603)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
     This paper addresses a bibliometric methodology to discover the structure of the scientific 'landscape' in order to gain detailed insight into the development of MD fields, their interaction, and the transfer of knowledge between them. This methodology is appropriate to visualize the position of MD activities in relation to interdisciplinary MD developments, and particularly in relation to socio-economic problems. Furthermore, it allows the identification of the major actors. It even provides the possibility of foresight. We describe a first approach to apply bibliometric mapping as an instrument to investigate characteristics of knowledge transfer. In this paper we discuss the creation of 'maps of science' with the help of advanced bibliometric methods. This 'bibliometric cartography' can be seen as a specific type of data mining, applied to large amounts of scientific publications. As an example we describe the mapping of the field of neuroscience, one of the largest and fastest-growing fields in the life sciences. The number of publications covered by this database is about 80,000 per year; the period covered is 1995-1998. Current research is going on to update the mapping for the years 1999-2002. This paper addresses the main lines of the methodology and its application in the study of knowledge transfer.
    Source
     Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
    Year
    2002
  9. Fong, A.C.M.: Mining a Web citation database for document clustering (2002) 0.01
    0.013930479 = product of:
      0.08358287 = sum of:
        0.08358287 = product of:
          0.16716574 = sum of:
            0.16716574 = weight(_text_:2002 in 3940) [ClassicSimilarity], result of:
              0.16716574 = score(doc=3940,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                1.0483589 = fieldWeight in 3940, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3940)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Applied artificial intelligence. 16(2002) no.4, S.283-292
    Year
    2002
  10. Handbuch Web Mining im Marketing : Konzepte, Systeme, Fallstudien (2002) 0.01
    0.010790501 = product of:
      0.064743005 = sum of:
        0.064743005 = product of:
          0.12948601 = sum of:
            0.12948601 = weight(_text_:2002 in 6106) [ClassicSimilarity], result of:
              0.12948601 = score(doc=6106,freq=3.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.81205523 = fieldWeight in 6106, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6106)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Year
    2002
  11. Amir, A.; Feldman, R.; Kashi, R.: A new and versatile method for association generation (1997) 0.01
    0.009000562 = product of:
      0.05400337 = sum of:
        0.05400337 = product of:
          0.08100505 = sum of:
            0.04068552 = weight(_text_:29 in 1270) [ClassicSimilarity], result of:
              0.04068552 = score(doc=1270,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.31092256 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
            0.040319532 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.040319532 = score(doc=1270,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  12. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.007875491 = product of:
      0.047252946 = sum of:
        0.047252946 = product of:
          0.070879415 = sum of:
            0.035599828 = weight(_text_:29 in 2908) [ClassicSimilarity], result of:
              0.035599828 = score(doc=2908,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.27205724 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
            0.03527959 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.03527959 = score(doc=2908,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    5. 4.1996 15:29:15
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  13. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.01
    0.0070198937 = product of:
      0.02105968 = sum of:
        0.01601974 = product of:
          0.03203948 = sum of:
            0.03203948 = weight(_text_:2002 in 1833) [ClassicSimilarity], result of:
              0.03203948 = score(doc=1833,freq=4.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.20093156 = fieldWeight in 1833, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
        0.0050399415 = product of:
          0.015119824 = sum of:
            0.015119824 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.015119824 = score(doc=1833,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Classification
    P96.A72M43 2002
    Date
    11. 5.2008 19:49:22
    LCC
    P96.A72M43 2002
  14. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.01
    0.006304481 = product of:
      0.018913442 = sum of:
        0.013873501 = product of:
          0.027747001 = sum of:
            0.027747001 = weight(_text_:2002 in 1178) [ClassicSimilarity], result of:
              0.027747001 = score(doc=1178,freq=3.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.17401183 = fieldWeight in 1178, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
        0.0050399415 = product of:
          0.015119824 = sum of:
            0.015119824 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.015119824 = score(doc=1178,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
    Year
    2002
  15. Benoit, G.: Data mining (2002) 0.01
    0.005970205 = product of:
      0.03582123 = sum of:
        0.03582123 = product of:
          0.07164246 = sum of:
            0.07164246 = weight(_text_:2002 in 4296) [ClassicSimilarity], result of:
              0.07164246 = score(doc=4296,freq=5.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.44929665 = fieldWeight in 4296, product of:
                  2.236068 = tf(freq=5.0), with freq of:
                    5.0 = termFreq=5.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4296)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Source
    Annual review of information science and technology. 36(2002), S.265-312
    Year
    2002
  16. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.005829348 = product of:
      0.017488044 = sum of:
        0.014128082 = product of:
          0.028256165 = sum of:
            0.028256165 = weight(_text_:2002 in 1789) [ClassicSimilarity], result of:
              0.028256165 = score(doc=1789,freq=7.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.17720498 = fieldWeight in 1789, product of:
                  2.6457512 = tf(freq=7.0), with freq of:
                    7.0 = termFreq=7.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.5 = coord(1/2)
        0.003359961 = product of:
          0.010079883 = sum of:
            0.010079883 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
              0.010079883 = score(doc=1789,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.07738023 = fieldWeight in 1789, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1789)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Classification
    TK7882.I6I635 2002
    Date
    23. 3.2008 19:10:22
    LCC
    TK7882.I6I635 2002
    Year
    2002
  17. Budzik, J.; Hammond, K.J.; Birnbaum, L.: Information access in context (2001) 0.00
    0.0039555365 = product of:
      0.02373322 = sum of:
        0.02373322 = product of:
          0.071199656 = sum of:
            0.071199656 = weight(_text_:29 in 3835) [ClassicSimilarity], result of:
              0.071199656 = score(doc=3835,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.5441145 = fieldWeight in 3835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3835)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2002 17:31:17
  18. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.00
    0.003919955 = product of:
      0.023519728 = sum of:
        0.023519728 = product of:
          0.07055918 = sum of:
            0.07055918 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.07055918 = score(doc=4577,freq=2.0), product of:
                0.13026431 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037198927 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    2. 4.2000 18:01:22
  19. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.00
    0.0038537504 = product of:
      0.023122502 = sum of:
        0.023122502 = product of:
          0.046245005 = sum of:
            0.046245005 = weight(_text_:2002 in 5997) [ClassicSimilarity], result of:
              0.046245005 = score(doc=5997,freq=3.0), product of:
                0.15945469 = queryWeight, product of:
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.037198927 = queryNorm
                0.29001972 = fieldWeight in 5997, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.28654 = idf(docFreq=1652, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Year
    2002
  20. Witten, I.H.; Frank, E.: Data Mining : Praktische Werkzeuge und Techniken für das maschinelle Lernen (2000) 0.00
    0.00339046 = product of:
      0.02034276 = sum of:
        0.02034276 = product of:
          0.06102828 = sum of:
            0.06102828 = weight(_text_:29 in 6833) [ClassicSimilarity], result of:
              0.06102828 = score(doc=6833,freq=2.0), product of:
                0.13085419 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037198927 = queryNorm
                0.46638384 = fieldWeight in 6833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6833)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    27. 1.1996 10:29:55

Languages

  • e 26
  • d 15

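The score explanations shown for each result above follow Lucene's ClassicSimilarity (TF-IDF) model: every matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the per-clause contributions are scaled by the coord() factors before being summed into the document score. The sketch below re-derives the 0.03066174 displayed for result 1 (doc 1086) from the constants in its explanation tree. It is an illustrative reconstruction, not Lucene source code: the helper names (idf, tf, leaf_score) are invented here, and queryNorm, fieldNorm, the term frequencies, and the coord factors are copied from the output above rather than recomputed.

import math

# Minimal sketch: re-derive the ClassicSimilarity numbers shown for result 1.
# All constants below are taken from the explanation tree above.

QUERY_NORM = 0.037198927   # queryNorm from the explanation
FIELD_NORM = 0.09375       # fieldNorm (length norm x boost) for doc 1086

def idf(doc_freq: int, max_docs: int) -> float:
    """ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(freq: float) -> float:
    """ClassicSimilarity tf: square root of the term frequency in the field."""
    return math.sqrt(freq)

def leaf_score(freq: float, doc_freq: int, max_docs: int, field_norm: float) -> float:
    """One term's contribution: queryWeight * fieldWeight."""
    term_idf = idf(doc_freq, max_docs)
    query_weight = term_idf * QUERY_NORM
    field_weight = tf(freq) * term_idf * field_norm
    return query_weight * field_weight

# Clause _text_:2002 -- freq=5, docFreq=1652, maxDocs=44218, then coord(1/2)
w_2002 = leaf_score(5.0, 1652, 44218, FIELD_NORM) * (1 / 2)

# Clause _text_:29 -- freq=2, docFreq=3565, maxDocs=44218, then coord(1/3)
w_29 = leaf_score(2.0, 3565, 44218, FIELD_NORM) * (1 / 3)

# Top level: sum of the matching clauses, scaled by coord(2/6)
total = (w_2002 + w_29) * (2 / 6)

print(f"{w_2002:.8f}")  # ~0.07164246
print(f"{w_29:.8f}")    # ~0.02034276
print(f"{total:.8f}")   # ~0.03066174, the score displayed for result 1

Run as written, this prints the same clause contributions (about 0.07164246 and 0.02034276) and the same final score as the explanation tree, up to floating-point rounding; the identical structure with different freq, fieldNorm, and coord values accounts for the scores of the other results.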