Search (43 results, page 2 of 3)

  • Filter: theme_ss:"Data Mining"
  1. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016) 0.01
    0.0062498376 = product of:
      0.012499675 = sum of:
        0.012499675 = product of:
          0.02499935 = sum of:
            0.02499935 = weight(_text_:h in 673) [ClassicSimilarity], result of:
              0.02499935 = score(doc=673,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.21959636 = fieldWeight in 673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=673)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    027.7 Zeitschrift für Bibliothekskultur. 4(2016), H.2
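    Reading the score trees: each tree is Lucene "explain" output for ClassicSimilarity (TF-IDF) ranking. A minimal sketch in Python, assuming standard ClassicSimilarity semantics (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), coord = fraction of query clauses matched), recomputes the first entry's score from the values shown above:

      import math

      # Values copied from the explain tree for doc 673 (entry 1).
      max_docs   = 44218        # documents in the index
      doc_freq   = 10020        # documents containing the matched term "h"
      query_norm = 0.045821942  # query normalisation shared by all clauses
      freq       = 2.0          # occurrences of "h" in the matched field
      field_norm = 0.0625       # index-time length norm for this field

      idf          = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~2.4844491
      query_weight = idf * query_norm                           # ~0.113842286
      tf           = math.sqrt(freq)                            # ~1.4142135
      field_weight = tf * idf * field_norm                      # ~0.21959636
      term_score   = query_weight * field_weight                # ~0.02499935
      final_score  = term_score * 0.5 * 0.5                     # two coord(1/2) factors
      print(f"{final_score:.10f}")                              # ~0.0062498376

    The remaining trees follow the same arithmetic; only freq (tf = sqrt(freq), so freq=4.0 yields tf=2.0, as in entries 5 and 6), fieldNorm, and the idf of the matched term ("h" vs. "22") vary from entry to entry.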
  2. Peters, G.; Gaese, V.: Das DocCat-System in der Textdokumentation von G+J (2003) 0.01
    0.0062082405 = product of:
      0.012416481 = sum of:
        0.012416481 = product of:
          0.024832962 = sum of:
            0.024832962 = weight(_text_:22 in 1507) [ClassicSimilarity], result of:
              0.024832962 = score(doc=1507,freq=2.0), product of:
                0.16046064 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045821942 = queryNorm
                0.15476047 = fieldWeight in 1507, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1507)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 4.2003 11:45:36
  3. Hölzig, C.: Google spürt Grippewellen auf : Die neue Anwendung ist bisher auf die USA beschränkt (2008) 0.01
    0.0062082405 = product of:
      0.012416481 = sum of:
        0.012416481 = product of:
          0.024832962 = sum of:
            0.024832962 = weight(_text_:22 in 2403) [ClassicSimilarity], result of:
              0.024832962 = score(doc=2403,freq=2.0), product of:
                0.16046064 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045821942 = queryNorm
                0.15476047 = fieldWeight in 2403, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2403)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 5.1997 8:44:22
  4. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    0.0062082405 = product of:
      0.012416481 = sum of:
        0.012416481 = product of:
          0.024832962 = sum of:
            0.024832962 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
              0.024832962 = score(doc=5234,freq=2.0), product of:
                0.16046064 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045821942 = queryNorm
                0.15476047 = fieldWeight in 5234, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5234)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 1.2018 11:33:49
  5. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.01
    0.0055241287 = product of:
      0.011048257 = sum of:
        0.011048257 = product of:
          0.022096515 = sum of:
            0.022096515 = weight(_text_:h in 605) [ClassicSimilarity], result of:
              0.022096515 = score(doc=605,freq=4.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.1940976 = fieldWeight in 605, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    0.0055241287 = product of:
      0.011048257 = sum of:
        0.011048257 = product of:
          0.022096515 = sum of:
            0.022096515 = weight(_text_:h in 3059) [ClassicSimilarity], result of:
              0.022096515 = score(doc=3059,freq=4.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.1940976 = fieldWeight in 3059, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures of the h-index were 0.738, 0.652, and 0.658 and those of text-mining were 0.501, 0.653, and 0.551, respectively. Last, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those used currently by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
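    For reference, the balanced F-measure combines precision P and recall R as

      F1 = 2PR / (P + R)

    Applied directly to the pooled h-index figures, 2 x 0.738 x 0.652 / (0.738 + 0.652) ≈ 0.692 rather than the reported 0.658, so the F-measures above are presumably averaged over individual test cases rather than computed from the aggregate precision and recall.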
  7. Raghavan, V.V.; Deogun, J.S.; Sever, H.: Knowledge discovery and data mining : introduction (1998) 0.01
    0.005468608 = product of:
      0.010937216 = sum of:
        0.010937216 = product of:
          0.021874432 = sum of:
            0.021874432 = weight(_text_:h in 2899) [ClassicSimilarity], result of:
              0.021874432 = score(doc=2899,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.19214681 = fieldWeight in 2899, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  8. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 4242) [ClassicSimilarity], result of:
              0.018749513 = score(doc=4242,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  9. Kulathuramaiyer, N.; Maurer, H.: Implications of emerging data mining (2009) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 3144) [ClassicSimilarity], result of:
              0.018749513 = score(doc=3144,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 3144, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3144)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. Chen, Y.-L.; Liu, Y.-H.; Ho, W.-L.: A text mining approach to assist the general public in the retrieval of legal documents (2013) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 521) [ClassicSimilarity], result of:
              0.018749513 = score(doc=521,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 521, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=521)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Sun, X.; Lin, H.: Topical community detection from mining user tagging behavior and interest (2013) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 605) [ClassicSimilarity], result of:
              0.018749513 = score(doc=605,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=605)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Teich, E.; Degaetano-Ortlieb, S.; Fankhauser, P.; Kermes, H.; Lapshinova-Koltunski, E.: The linguistic construal of disciplinarity : a data-mining approach using register features (2016) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 3015) [ClassicSimilarity], result of:
              0.018749513 = score(doc=3015,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 3015, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3015)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 3205) [ClassicSimilarity], result of:
              0.018749513 = score(doc=3205,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 3205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    027.7 Zeitschrift für Bibliothekskultur. 4(2016), H.2
  14. Drees, B.: Text und data mining : Herausforderungen und Möglichkeiten für Bibliotheken (2016) 0.00
    0.004687378 = product of:
      0.009374756 = sum of:
        0.009374756 = product of:
          0.018749513 = sum of:
            0.018749513 = weight(_text_:h in 3952) [ClassicSimilarity], result of:
              0.018749513 = score(doc=3952,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.16469726 = fieldWeight in 3952, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3952)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Perspektive Bibliothek. 5(2016) H.1, S.49-73
  15. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.00
    0.0046561803 = product of:
      0.009312361 = sum of:
        0.009312361 = product of:
          0.018624721 = sum of:
            0.018624721 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.018624721 = score(doc=1178,freq=2.0), product of:
                0.16046064 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045821942 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
  16. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.00
    0.0046561803 = product of:
      0.009312361 = sum of:
        0.009312361 = product of:
          0.018624721 = sum of:
            0.018624721 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.018624721 = score(doc=1833,freq=2.0), product of:
                0.16046064 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045821942 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    11. 5.2008 19:49:22
  17. Suakkaphong, N.; Zhang, Z.; Chen, H.: Disease named entity recognition using semisupervised learning and conditional random fields (2011) 0.00
    0.0039061487 = product of:
      0.0078122974 = sum of:
        0.0078122974 = product of:
          0.015624595 = sum of:
            0.015624595 = weight(_text_:h in 4367) [ClassicSimilarity], result of:
              0.015624595 = score(doc=4367,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.13724773 = fieldWeight in 4367, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4367)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Wei, C.-P.; Lee, Y.-H.; Chiang, Y.-S.; Chen, C.-T.; Yang, C.C.C.: Exploiting temporal characteristics of features for effectively discovering event episodes from news corpora (2014) 0.00
    0.0039061487 = product of:
      0.0078122974 = sum of:
        0.0078122974 = product of:
          0.015624595 = sum of:
            0.015624595 = weight(_text_:h in 1225) [ClassicSimilarity], result of:
              0.015624595 = score(doc=1225,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.13724773 = fieldWeight in 1225, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1225)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Zhang, Z.; Li, Q.; Zeng, D.; Gao, H.: Extracting evolutionary communities in community question answering (2014) 0.00
    0.0039061487 = product of:
      0.0078122974 = sum of:
        0.0078122974 = product of:
          0.015624595 = sum of:
            0.015624595 = weight(_text_:h in 1286) [ClassicSimilarity], result of:
              0.015624595 = score(doc=1286,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.13724773 = fieldWeight in 1286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1286)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  20. Ekbia, H.; Mattioli, M.; Kouper, I.; Arave, G.; Ghazinejad, A.; Bowman, T.; Suri, V.R.; Tsou, A.; Weingart, S.; Sugimoto, C.R.: Big data, bigger dilemmas : a critical review (2015) 0.00
    0.0039061487 = product of:
      0.0078122974 = sum of:
        0.0078122974 = product of:
          0.015624595 = sum of:
            0.015624595 = weight(_text_:h in 2155) [ClassicSimilarity], result of:
              0.015624595 = score(doc=2155,freq=2.0), product of:
                0.113842286 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.045821942 = queryNorm
                0.13724773 = fieldWeight in 2155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2155)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    

Languages

  • English: 23
  • German: 20
