Search (57 results, page 3 of 3)

  • Filter: theme_ss:"Data Mining"
  1. Frické, M.: Big data and its epistemology (2015) 0.00
    0.0047023837 = product of:
      0.009404767 = sum of:
        0.009404767 = product of:
          0.018809535 = sum of:
            0.018809535 = weight(_text_:m in 1811) [ClassicSimilarity], result of:
              0.018809535 = score(doc=1811,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.1649624 = fieldWeight in 1811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1811)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
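    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) scoring. As a minimal sketch, assuming Lucene's standard formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))) and using a helper name of our own rather than Lucene's API, the numbers can be reproduced in Python:

      import math

      # Sketch of the ClassicSimilarity computation shown in the explain
      # tree above; variable names follow the explain output.
      def classic_similarity_score(freq, doc_freq, max_docs,
                                   query_norm, field_norm, coords):
          tf = math.sqrt(freq)                           # 1.4142135
          idf = 1 + math.log(max_docs / (doc_freq + 1))  # 2.4884486
          query_weight = idf * query_norm                # 0.114023164
          field_weight = tf * idf * field_norm           # 0.1649624
          score = query_weight * field_weight            # 0.018809535
          for coord in coords:                           # two coord(1/2) levels
              score *= coord
          return score

      print(classic_similarity_score(freq=2.0, doc_freq=9980, max_docs=44218,
                                     query_norm=0.045820985,
                                     field_norm=0.046875,
                                     coords=[0.5, 0.5]))  # ~0.0047023837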
    
  2. Bella, A. La; Fronzetti Colladon, A.; Battistoni, E.; Castellan, S.; Francucci, M.: Assessing perceived organizational leadership styles through twitter text mining (2018) 0.00
    0.0047023837 = product of:
      0.009404767 = sum of:
        0.009404767 = product of:
          0.018809535 = sum of:
            0.018809535 = weight(_text_:m in 2400) [ClassicSimilarity], result of:
              0.018809535 = score(doc=2400,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.1649624 = fieldWeight in 2400, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2400)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  3. Ebrahimi, M.; ShafieiBavani, E.; Wong, R.; Chen, F.: Twitter user geolocation by filtering of highly mentioned users (2018) 0.00
    0.0047023837 = product of:
      0.009404767 = sum of:
        0.009404767 = product of:
          0.018809535 = sum of:
            0.018809535 = weight(_text_:m in 4286) [ClassicSimilarity], result of:
              0.018809535 = score(doc=4286,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.1649624 = fieldWeight in 4286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4286)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  4. Lowe, D.B.; Dollinger, I.; Koster, T.; Herbert, B.E.: Text mining for type of research classification (2021) 0.00
    0.0047023837 = product of:
      0.009404767 = sum of:
        0.009404767 = product of:
          0.018809535 = sum of:
            0.018809535 = weight(_text_:m in 720) [ClassicSimilarity], result of:
              0.018809535 = score(doc=720,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.1649624 = fieldWeight in 720, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.046875 = fieldNorm(doc=720)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This project brought together undergraduate students in Computer Science with librarians to mine abstracts of articles from the Texas A&M University Libraries' institutional repository, OAKTrust, in order to probe the creation of new metadata to improve discovery and use. The mining operation task consisted simply of classifying the articles into two categories of research type: basic research ("for understanding," "curiosity-based," or "knowledge-based") and applied research ("use-based"). These categories are fundamental especially for funders but are also important to researchers. The mining-to-classification steps took several iterations, but ultimately, we achieved good results with the toolkit BERT (Bidirectional Encoder Representations from Transformers). The project and its workflows represent a preview of what may lie ahead in the future of crafting metadata using text mining techniques to enhance discoverability.
  5. Lischka, K.: Spurensuche im Datenwust : Data-Mining-Software fahndet nach kriminellen Mitarbeitern, guten Kunden - und bald vielleicht auch nach Terroristen (2002) 0.00
    0.004656083 = product of:
      0.009312166 = sum of:
        0.009312166 = product of:
          0.018624332 = sum of:
            0.018624332 = weight(_text_:22 in 1178) [ClassicSimilarity], result of:
              0.018624332 = score(doc=1178,freq=2.0), product of:
                0.16045728 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.045820985 = queryNorm
                0.116070345 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
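    Entry 5 is scored on the term "22" rather than "m", so only docFreq, and with it idf, changes. A quick check, assuming the same idf formula as in the sketch above:

      import math
      print(1 + math.log(44218 / (3622 + 1)))   # ~3.5018296, idf of "22"
      print(1 + math.log(44218 / (9980 + 1)))   # ~2.4884486, idf of "m"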
    
    Content
    "Ob man als Terrorist einen Anschlag gegen die Vereinigten Staaten plant, als Kassierer Scheine aus der Kasse unterschlägt oder für bestimmte Produkte besonders gerne Geld ausgibt - einen Unterschied macht Data-Mining-Software da nicht. Solche Programme analysieren riesige Daten- mengen und fällen statistische Urteile. Mit diesen Methoden wollen nun die For- scher des "Information Awaren in den Vereinigten Staaten Spuren von Terroristen in den Datenbanken von Behörden und privaten Unternehmen wie Kreditkartenfirmen finden. 200 Millionen Dollar umfasst der Jahresetat für die verschiedenen Forschungsprojekte. Dass solche Software in der Praxis funktioniert, zeigen die steigenden Umsätze der Anbieter so genannter Customer-Relationship-Management-Software. Im vergangenen Jahr ist das Potenzial für analytische CRM-Anwendungen laut dem Marktforschungsinstitut IDC weltweit um 22 Prozent gewachsen, bis zum Jahr 2006 soll es in Deutschland mit einem jährlichen Plus von 14,1 Prozent so weitergehen. Und das trotz schwacher Konjunktur - oder gerade deswegen. Denn ähnlich wie Data-Mining der USRegierung helfen soll, Terroristen zu finden, entscheiden CRM-Programme heute, welche Kunden für eine Firma profitabel sind. Und welche es künftig sein werden, wie Manuela Schnaubelt, Sprecherin des CRM-Anbieters SAP, beschreibt: "Die Kundenbewertung ist ein zentraler Bestandteil des analytischen CRM. Sie ermöglicht es Unternehmen, sich auf die für sie wichtigen und richtigen Kunden zu fokussieren. Darüber hinaus können Firmen mit speziellen Scoring- Verfahren ermitteln, welche Kunden langfristig in welchem Maße zum Unternehmenserfolg beitragen." Die Folgen der Bewertungen sind für die Betroffenen nicht immer positiv: Attraktive Kunden profitieren von individuellen Sonderangeboten und besonderer Zuwendung. Andere hängen vielleicht so lauge in der Warteschleife des Telefonservice, bis die profitableren Kunden abgearbeitet sind. So könnte eine praktische Umsetzung dessen aussehen, was SAP-Spreche-rin Schnaubelt abstrakt beschreibt: "In vielen Unternehmen wird Kundenbewertung mit der klassischen ABC-Analyse durchgeführt, bei der Kunden anhand von Daten wie dem Umsatz kategorisiert werden. A-Kunden als besonders wichtige Kunden werden anders betreut als C-Kunden." Noch näher am geplanten Einsatz von Data-Mining zur Terroristenjagd ist eine Anwendung, die heute viele Firmen erfolgreich nutzen: Sie spüren betrügende Mitarbeiter auf. Werner Sülzer vom großen CRM-Anbieter NCR Teradata beschreibt die Möglichkeiten so: "Heute hinterlässt praktisch jeder Täter - ob Mitarbeiter, Kunde oder Lieferant - Datenspuren bei seinen wirtschaftskriminellen Handlungen. Es muss vorrangig darum gehen, einzelne Spuren zu Handlungsmustern und Täterprofilen zu verdichten. Das gelingt mittels zentraler Datenlager und hoch entwickelter Such- und Analyseinstrumente." Von konkreten Erfolgen sprich: Entlas-sungen krimineller Mitarbeiter-nach Einsatz solcher Programme erzählen Unternehmen nicht gerne. Matthias Wilke von der "Beratungsstelle für Technologiefolgen und Qualifizierung" (BTQ) der Gewerkschaft Verdi weiß von einem Fall 'aus der Schweiz. Dort setzt die Handelskette "Pick Pay" das Programm "Lord Lose Prevention" ein. Zwei Monate nach Einfüh-rung seien Unterschlagungen im Wert von etwa 200 000 Franken ermittelt worden. Das kostete mehr als 50 verdächtige Kassiererinnen und Kassierer den Job.
  6. Kantardzic, M.: Data mining : concepts, models, methods, and algorithms (2003) 0.00
    0.00443345 = product of:
      0.0088669 = sum of:
        0.0088669 = product of:
          0.0177338 = sum of:
            0.0177338 = weight(_text_:m in 2291) [ClassicSimilarity], result of:
              0.0177338 = score(doc=2291,freq=4.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.15552804 = fieldWeight in 2291, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2291)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  7. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 5997) [ClassicSimilarity], result of:
              0.015674612 = score(doc=5997,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 5997, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5997)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  8. Survey of text mining : clustering, classification, and retrieval (2004) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 804) [ClassicSimilarity], result of:
              0.015674612 = score(doc=804,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  9. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 607) [ClassicSimilarity], result of:
              0.015674612 = score(doc=607,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=607)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. O'Brien, H.L.; Lebow, M.: Mixed-methods approach to measuring user experience in online news interactions (2013) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 1001) [ClassicSimilarity], result of:
              0.015674612 = score(doc=1001,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 1001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  11. Ekbia, H.; Mattioli, M.; Kouper, I.; Arave, G.; Ghazinejad, A.; Bowman, T.; Suri, V.R.; Tsou, A.; Weingart, S.; Sugimoto, C.R.: Big data, bigger dilemmas : a critical review (2015) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 2155) [ClassicSimilarity], result of:
              0.015674612 = score(doc=2155,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 2155, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2155)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Gill, A.J.; Hinrichs-Krapels, S.; Blanke, T.; Grant, J.; Hedges, M.; Tanner, S.: Insight workflow : systematically combining human and computational methods to explore textual data (2017) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 3682) [ClassicSimilarity], result of:
              0.015674612 = score(doc=3682,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 3682, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3682)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Goldberg, D.M.; Zaman, N.; Brahma, A.; Aloiso, M.: Are mortgage loan closing delay risks predictable? : A predictive analysis using text mining on discussion threads (2022) 0.00
    0.003918653 = product of:
      0.007837306 = sum of:
        0.007837306 = product of:
          0.015674612 = sum of:
            0.015674612 = weight(_text_:m in 501) [ClassicSimilarity], result of:
              0.015674612 = score(doc=501,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.13746867 = fieldWeight in 501, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=501)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 2222) [ClassicSimilarity], result of:
              0.012539689 = score(doc=2222,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 2222, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2222)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  15. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 5218) [ClassicSimilarity], result of:
              0.012539689 = score(doc=5218,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 5218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5218)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  16. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 354) [ClassicSimilarity], result of:
              0.012539689 = score(doc=354,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=354)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
  17. Mining text data (2012) 0.00
    0.0031349224 = product of:
      0.0062698447 = sum of:
        0.0062698447 = product of:
          0.012539689 = sum of:
            0.012539689 = weight(_text_:m in 362) [ClassicSimilarity], result of:
              0.012539689 = score(doc=362,freq=2.0), product of:
                0.114023164 = queryWeight, product of:
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.045820985 = queryNorm
                0.10997493 = fieldWeight in 362, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4884486 = idf(docFreq=9980, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    m
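    Across the _text_:m hits, the explain trees differ only in fieldNorm (and, for entry 6, in freq), so the score tiers on this page scale linearly with fieldNorm. A quick check against the tiers above, reusing the decomposition from the first sketch:

      base = 0.0047023837                     # tier at fieldNorm = 0.046875
      for field_norm in (0.046875, 0.0390625, 0.03125):
          print(field_norm, base * field_norm / 0.046875)
      # ~0.0047023837, ~0.003918653, ~0.0031349224 -- the three tiers above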

Languages

  • e 43
  • d 14
