Search (29 results, page 1 of 2)

  • theme_ss:"Data Mining"
  1. Jäger, L.: Von Big Data zu Big Brother (2018) 0.06
    0.06006154 = product of:
      0.12012308 = sum of:
        0.12012308 = sum of:
          0.08700665 = weight(_text_:wissen in 5234) [ClassicSimilarity], result of:
            0.08700665 = score(doc=5234,freq=6.0), product of:
              0.26354674 = queryWeight, product of:
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.06110665 = queryNorm
              0.33013746 = fieldWeight in 5234, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.03125 = fieldNorm(doc=5234)
          0.03311643 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
            0.03311643 = score(doc=5234,freq=2.0), product of:
              0.21398507 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06110665 = queryNorm
              0.15476047 = fieldWeight in 5234, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5234)
      0.5 = coord(1/2)
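    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(1/2) halves the sum because only one of the two query clauses matched. A minimal sketch that reproduces the numbers in this first explanation (constants copied from the tree; the helper name and tolerance are our own):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    tf = math.sqrt(freq)              # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.06110665
# "wissen" in doc 5234: freq=6, idf=4.3128977, fieldNorm=0.03125
w_wissen = classic_similarity(6.0, 4.3128977, QUERY_NORM, 0.03125)
# "22" in doc 5234: freq=2, idf=3.5018296, fieldNorm=0.03125
w_22 = classic_similarity(2.0, 3.5018296, QUERY_NORM, 0.03125)

score = 0.5 * (w_wissen + w_22)       # coord(1/2)
assert math.isclose(w_wissen, 0.08700665, rel_tol=1e-5)
assert math.isclose(w_22, 0.03311643, rel_tol=1e-5)
assert math.isclose(score, 0.06006154, rel_tol=1e-5)
```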
    
    Abstract
    In 1983, a single topic stirred the entire Federal Republic of Germany: the planned national census. Every household in West Germany was to fill out questionnaires with 36 questions on its housing situation, the persons living in the household, and their income. Massive resistance arose, and hundreds of citizens' initiatives formed across the country against the survey. People did not want to be "registered"; privacy was sacrosanct. There was a (justified) concern that the answers on the nominally anonymized questionnaires would allow respondents' identities to be inferred. The Federal Constitutional Court ruled in favor of the plaintiffs against the census: the planned count violated data protection and thus the constitution, and it was stopped. Only one generation later, we casually hand over the supermarket chain's loyalty card every time we shop, collecting a few points toward a gift or a discount on the next purchase. And we know perfectly well that the supermarket thereby learns our purchasing behavior down to the last detail. What we do not know is who else gains access to these data. Their buyers obtain not only our purchases but can also use them to infer our habits, personal preferences, and income. Just as carefree, we surf the Internet, google and shop, mail and chat. Google, Facebook, and Microsoft do not merely watch all of this: they store everything we say, buy, and search for, for all time, and use it for their own purposes. They comb through our e-mails, know our personal schedules, track our current location, and know our political, religious, and sexual preferences (who does not know the button "interested in men" or "interested in women"?), our closest friends with whom we are connected online, our relationship status, which school we attend or attended, and much more.
    Date
    22. 1.2018 11:33:49
  2. Borgelt, C.; Kruse, R.: Unsicheres Wissen nutzen (2002) 0.04
    0.044400394 = product of:
      0.08880079 = sum of:
        0.08880079 = product of:
          0.17760158 = sum of:
            0.17760158 = weight(_text_:wissen in 1104) [ClassicSimilarity], result of:
              0.17760158 = score(doc=1104,freq=4.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.67389023 = fieldWeight in 1104, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1104)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Probabilistic inference networks are a proven means of processing uncertain knowledge cleanly and on a sound mathematical footing. Recently, methods have been developed to learn them automatically from sample data.
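    As a toy illustration of the learning-plus-inference idea (not the authors' method; the medical-test data below are invented), one can estimate the conditional probability tables of a two-node network Disease -> Test from samples and then reason with Bayes' rule:

```python
from collections import Counter

# Invented sample data: (disease_present, test_positive) observations.
samples = [(True, True)] * 45 + [(True, False)] * 5 + \
          [(False, True)] * 50 + [(False, False)] * 900

# Learn the network parameters (CPTs) by counting.
n = len(samples)
p_d = sum(d for d, _ in samples) / n                      # P(D)
joint = Counter(samples)
p_t_given_d = joint[(True, True)] / (joint[(True, True)] + joint[(True, False)])
p_t_given_not_d = joint[(False, True)] / (joint[(False, True)] + joint[(False, False)])

# Inference with uncertain knowledge: P(D | T) via Bayes' rule.
p_t = p_t_given_d * p_d + p_t_given_not_d * (1 - p_d)
p_d_given_t = p_t_given_d * p_d / p_t
print(f"P(disease | positive test) = {p_d_given_t:.3f}")  # ~0.474
```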
  3. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.03
    0.03125615 = product of:
      0.0625123 = sum of:
        0.0625123 = sum of:
          0.037674982 = weight(_text_:wissen in 1833) [ClassicSimilarity], result of:
            0.037674982 = score(doc=1833,freq=2.0), product of:
              0.26354674 = queryWeight, product of:
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.06110665 = queryNorm
              0.14295371 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.3128977 = idf(docFreq=1609, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
          0.02483732 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
            0.02483732 = score(doc=1833,freq=2.0), product of:
              0.21398507 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.06110665 = queryNorm
              0.116070345 = fieldWeight in 1833, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=1833)
      0.5 = coord(1/2)
    
    Content
    Includes, among others, the following contributions (documentation aspects): Günter Perers/Volker Gaese: Das DocCat-System in der Textdokumentation von Gr+J (Weimar 2000); Thomas Gerick: Finden statt suchen. Knowledge Retrieval in Wissensbanken. Mit organisiertem Wissen zu mehr Erfolg (Weimar 2000); Winfried Gödert: Aufbereitung und Rezeption von Information (Weimar 2000); Elisabeth Damen: Klassifikation als Ordnungssystem im elektronischen Pressearchiv (Köln 2001); Clemens Schlenkrich: Aspekte neuer Regelwerksarbeit - Multimediales Datenmodell für ARD und ZDF (Köln 2001); Josef Wandeler: "Comprenez-vous only Bahnhof?" - Mehrsprachigkeit in der Mediendokumentation (Köln 2001)
    Date
    11. 5.2008 19:49:22
  4. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.03
    0.028976874 = product of:
      0.05795375 = sum of:
        0.05795375 = product of:
          0.1159075 = sum of:
            0.1159075 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.1159075 = score(doc=4577,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    2. 4.2000 18:01:22
  5. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.03
    0.026972838 = product of:
      0.053945675 = sum of:
        0.053945675 = product of:
          0.16183703 = sum of:
            0.16183703 = weight(_text_:objects in 3884) [ClassicSimilarity], result of:
              0.16183703 = score(doc=3884,freq=4.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.49828792 = fieldWeight in 3884, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
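    The metric limitation described above can be made concrete: if A is similar to both B and C, the triangle inequality forces B and C close together in any single metric map, even when the data say they are unrelated. A tiny check with invented word-association similarities:

```python
# Toy similarities: "tie" goes with "tuxedo" and with "rope", but
# "tuxedo" and "rope" are unrelated (an intransitive similarity).
sim = {("tie", "tuxedo"): 0.9, ("tie", "rope"): 0.9, ("tuxedo", "rope"): 0.0}
dist = {pair: 1.0 - s for pair, s in sim.items()}  # target map distances

# Any metric map must satisfy the triangle inequality:
bound = dist[("tie", "tuxedo")] + dist[("tie", "rope")]  # at most 0.2
needed = dist[("tuxedo", "rope")]                        # 1.0
print(f"bound {bound:.1f} < required {needed:.1f}: no single map fits")
```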
  6. Fayyad, U.M.; Djorgovski, S.G.; Weir, N.: From digitized images to online catalogs : data mining a sky survey (1996) 0.03
    0.025430236 = product of:
      0.050860472 = sum of:
        0.050860472 = product of:
          0.15258141 = sum of:
            0.15258141 = weight(_text_:objects in 6625) [ClassicSimilarity], result of:
              0.15258141 = score(doc=6625,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.46979034 = fieldWeight in 6625, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6625)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Offers a data mining approach based on machine learning classification methods to the problem of automated cataloguing of online databases of digital images resulting from sky surveys. The SKICAT system automates the reduction and analysis of 3 terabytes of images expected to contain about 2 billion sky objects. It offers a solution to problems associated with the analysis of large data sets in science
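    SKICAT's core step is supervised classification of detected sky objects from per-object image attributes. A schematic stand-in with scikit-learn (the synthetic features and the star/galaxy setup are assumptions, not the SKICAT pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for per-object image attributes (brightness,
# ellipticity, radial profile slope); real SKICAT features differ.
stars = np.column_stack([rng.normal(1.0, 0.2, n), rng.normal(0.1, 0.05, n),
                         rng.normal(-2.0, 0.3, n)])
galaxies = np.column_stack([rng.normal(0.6, 0.2, n), rng.normal(0.4, 0.1, n),
                            rng.normal(-1.0, 0.3, n)])
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)  # 0 = star, 1 = galaxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```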
  7. Tiefschürfen in Datenbanken (2002) 0.03
    0.025116654 = product of:
      0.05023331 = sum of:
        0.05023331 = product of:
          0.10046662 = sum of:
            0.10046662 = weight(_text_:wissen in 996) [ClassicSimilarity], result of:
              0.10046662 = score(doc=996,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.38120988 = fieldWeight in 996, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.0625 = fieldNorm(doc=996)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Contains the contributions: Kruse, R., C. Borgelt: Suche im Datendschungel - Borgelt, C. u. R. Kruse: Unsicheres Wissen nutzen - Wrobel, S.: Lern- und Entdeckungsverfahren - Keim, D.A.: Data Mining mit bloßem Auge
  8. Heyer, G.; Quasthoff, U.; Wittig, T.: Text Mining : Wissensrohstoff Text. Konzepte, Algorithmen, Ergebnisse (2006) 0.03
    0.025116654 = product of:
      0.05023331 = sum of:
        0.05023331 = product of:
          0.10046662 = sum of:
            0.10046662 = weight(_text_:wissen in 5218) [ClassicSimilarity], result of:
              0.10046662 = score(doc=5218,freq=8.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.38120988 = fieldWeight in 5218, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5218)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A large part of the world's knowledge exists as digital text on the Internet or in intranets. Today's search engines exploit this raw material only rudimentarily: they can recognize semantic relationships only to a limited extent. Everyone is waiting for the Semantic Web, in which the authors of texts will add the semantics themselves, but that is still a long way off. There is, however, a technology that already makes it possible to analyze and prepare semantic relationships in raw text: the research field of text mining uses statistical and pattern-based methods to extract, process, and exploit knowledge from texts. This is where the foundation for the search engines of the future is being laid. The first German textbook on this groundbreaking technology asks, for example: What comes to mind at the word "Stich"? Some think of tennis, others of the card game Skat. Text mining can determine such different contexts automatically and display them as word networks. Which terms appear most frequently to the left and right of the word "Festplatte"? Which word forms and proper names have newly entered the German language since 2001? Text mining answers these and many other questions. Dive into a new, fascinating scientific discipline with this textbook and discover new, previously unknown relationships and perspectives; see how the raw material text becomes knowledge. The book is aimed at students as well as practitioners with a focus on computer science, business informatics, and/or linguistics who want to learn about the foundations, methods, and applications of text mining and who are looking for ideas for implementing their own applications. It is based on work carried out in recent years at the Abteilung Automatische Sprachverarbeitung (Institut für Informatik, Universität Leipzig) under the direction of Prof. Dr. Heyer. A wealth of practical examples of text mining concepts and algorithms gives the reader a comprehensive yet detailed understanding of the foundations and applications of text mining. Topics covered: knowledge and text; foundations of semantic analysis; text databases; language statistics; clustering; pattern analysis; hybrid methods; example applications; appendices on statistics and linguistic foundations. 360 pages, 54 figures, 58 tables, and 95 glossary entries; includes the free e-learning course "Schnelleinstieg: Sprachstatistik". An online certificate course with mentor and tutor support is to follow shortly.
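    The neighbour question posed in the abstract (which terms appear most often to the left and right of a given word?) reduces to counting co-occurrences over a corpus. A minimal sketch, with an invented three-sentence corpus:

```python
from collections import Counter

corpus = [
    "die externe Festplatte ist voll",
    "eine neue Festplatte wurde eingebaut",
    "die alte Festplatte ist defekt",
]
left, right = Counter(), Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok == "Festplatte":
            if i > 0:
                left[tokens[i - 1]] += 1       # word to the left
            if i + 1 < len(tokens):
                right[tokens[i + 1]] += 1      # word to the right

print("left neighbors:", left.most_common(3))
print("right neighbors:", right.most_common(3))
```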
  9. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.03
    0.025116654 = product of:
      0.05023331 = sum of:
        0.05023331 = product of:
          0.10046662 = sum of:
            0.10046662 = weight(_text_:wissen in 2568) [ClassicSimilarity], result of:
              0.10046662 = score(doc=2568,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.38120988 = fieldWeight in 2568, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2568)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. KDD : techniques and applications (1998) 0.02
    0.02483732 = product of:
      0.04967464 = sum of:
        0.04967464 = product of:
          0.09934928 = sum of:
            0.09934928 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.09934928 = score(doc=6783,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  11. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.02
    0.022251455 = product of:
      0.04450291 = sum of:
        0.04450291 = product of:
          0.13350873 = sum of:
            0.13350873 = weight(_text_:objects in 3886) [ClassicSimilarity], result of:
              0.13350873 = score(doc=3886,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.41106653 = fieldWeight in 3886, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
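    scikit-learn exposes exactly this trade-off in its TSNE implementation: method='barnes_hut' uses the O(N log N) gradient approximation discussed here, method='exact' the O(N^2) baseline. A usage sketch (the digits subset is only a stand-in for the paper's large-scale experiments):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
X = X[:500]  # keep the exact O(N^2) run small

# Barnes-Hut approximation of the gradient: O(N log N) per iteration.
emb_bh = TSNE(method="barnes_hut", angle=0.5, random_state=0).fit_transform(X)

# Exact gradient: O(N^2), feasible only for small N.
emb_exact = TSNE(method="exact", random_state=0).fit_transform(X)
print(emb_bh.shape, emb_exact.shape)  # (500, 2) (500, 2)
```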
  12. Huvila, I.: Mining qualitative data on human information behaviour from the Web (2010) 0.02
    0.021977073 = product of:
      0.043954145 = sum of:
        0.043954145 = product of:
          0.08790829 = sum of:
            0.08790829 = weight(_text_:wissen in 4676) [ClassicSimilarity], result of:
              0.08790829 = score(doc=4676,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.33355865 = fieldWeight in 4676, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4676)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information und Wissen: global, sozial und frei? Proceedings des 12. Internationalen Symposiums für Informationswissenschaft (ISI 2011) ; Hildesheim, 9. - 11. März 2011. Hrsg.: J. Griesbaum, T. Mandl u. C. Womser-Hacker
  13. Loh, S.; Oliveira, J.P.M. de; Gastal, F.L.: Knowledge discovery in textual documentation : qualitative and quantitative analyses (2001) 0.02
    0.019072678 = product of:
      0.038145356 = sum of:
        0.038145356 = product of:
          0.11443606 = sum of:
            0.11443606 = weight(_text_:objects in 4482) [ClassicSimilarity], result of:
              0.11443606 = score(doc=4482,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.35234275 = fieldWeight in 4482, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4482)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This paper presents an approach for performing knowledge discovery in texts through qualitative and quantitative analyses of high-level textual characteristics. Instead of applying mining techniques on attribute values, terms or keywords extracted from texts, the discovery process works over concepts identified in texts. Concepts represent real world events and objects, and they help the user to understand ideas, trends, thoughts, opinions and intentions present in texts. The approach combines a quasi-automatic categorisation task (for qualitative analysis) with a mining process (for quantitative analysis). The goal is to find new and useful knowledge inside a textual collection through the use of mining techniques applied over concepts (representing text content). In this paper, an application of the approach to medical records of a psychiatric hospital is presented. The approach helps physicians to extract knowledge about patients and diseases. This knowledge may be used for epidemiological studies, for training professionals and it may be also used to support physicians to diagnose and evaluate diseases.
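    Schematically, the shift from keywords to concepts amounts to tagging each record with higher-level categories and then mining over those tags. A sketch with an invented concept dictionary (not the authors' system):

```python
from collections import Counter

# Invented concept dictionary: surface terms -> higher-level concept.
CONCEPTS = {
    "anxious": "anxiety", "worried": "anxiety", "panic": "anxiety",
    "sleepless": "insomnia", "insomnia": "insomnia",
}

records = [
    "patient reports feeling anxious and sleepless",
    "panic episodes at night worried about work",
]

# Quasi-automatic categorisation: tag each record with concepts ...
tagged = [{CONCEPTS[t] for t in rec.split() if t in CONCEPTS} for rec in records]
# ... then mine quantitatively over concepts instead of raw keywords.
freq = Counter(c for concepts in tagged for c in concepts)
print(freq.most_common())  # [('anxiety', 2), ('insomnia', 1)]
```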
  14. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.02
    0.016558215 = product of:
      0.03311643 = sum of:
        0.03311643 = product of:
          0.06623286 = sum of:
            0.06623286 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.06623286 = score(doc=1737,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22.11.1998 18:57:22
  15. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.02
    0.016558215 = product of:
      0.03311643 = sum of:
        0.03311643 = product of:
          0.06623286 = sum of:
            0.06623286 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.06623286 = score(doc=4261,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    17. 7.2002 19:22:06
  16. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.02
    0.016558215 = product of:
      0.03311643 = sum of:
        0.03311643 = product of:
          0.06623286 = sum of:
            0.06623286 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.06623286 = score(doc=1270,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347
  17. Ku, L.-W.; Chen, H.-H.: Mining opinions from the Web : beyond relevance retrieval (2007) 0.02
    0.015893899 = product of:
      0.031787798 = sum of:
        0.031787798 = product of:
          0.095363386 = sum of:
            0.095363386 = weight(_text_:objects in 605) [ClassicSimilarity], result of:
              0.095363386 = score(doc=605,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.29361898 = fieldWeight in 605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=605)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    Documents discussing public affairs, common themes, interesting products, and so on, are reported and distributed on the Web. Positive and negative opinions embedded in documents are useful references and feedbacks for governments to improve their services, for companies to market their products, and for customers to purchase their objects. Web opinion mining aims to extract, summarize, and track various aspects of subjective information on the Web. Mining subjective information enables traditional information retrieval (IR) systems to retrieve more data from human viewpoints and provide information with finer granularity. Opinion extraction identifies opinion holders, extracts the relevant opinion sentences, and decides their polarities. Opinion summarization recognizes the major events embedded in documents and summarizes the supportive and the nonsupportive evidence. Opinion tracking captures subjective information from various genres and monitors the developments of opinions from spatial and temporal dimensions. To demonstrate and evaluate the proposed opinion mining algorithms, news and bloggers' articles are adopted. Documents in the evaluation corpora are tagged in different granularities from words, sentences to documents. In the experiments, positive and negative sentiment words and their weights are mined on the basis of Chinese word structures. The f-measure is 73.18% and 63.75% for verbs and nouns, respectively. Utilizing the sentiment words mined together with topical words, we achieve f-measure 62.16% at the sentence level and 74.37% at the document level.
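    At its simplest, the opinion-extraction step scores sentences against a weighted sentiment lexicon; a minimal sketch (the English lexicon and weights below are invented, unlike the mined Chinese lexicon the paper evaluates):

```python
# Minimal lexicon-based polarity scoring in the spirit of the abstract.
SENTIMENT = {"good": 1.0, "great": 1.5, "bad": -1.0, "terrible": -1.5}

def sentence_polarity(sentence: str) -> float:
    """Sum sentiment-word weights; >0 positive, <0 negative."""
    return sum(SENTIMENT.get(tok, 0.0) for tok in sentence.lower().split())

reviews = ["great camera and good battery", "terrible support and bad update"]
for r in reviews:
    print(f"{sentence_polarity(r):+.1f}  {r}")
```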
  18. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.02
    0.015893899 = product of:
      0.031787798 = sum of:
        0.031787798 = product of:
          0.095363386 = sum of:
            0.095363386 = weight(_text_:objects in 3888) [ClassicSimilarity], result of:
              0.095363386 = score(doc=3888,freq=2.0), product of:
                0.3247862 = queryWeight, product of:
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.06110665 = queryNorm
                0.29361898 = fieldWeight in 3888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.315071 = idf(docFreq=590, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3888)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
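    A quick way to try the technique today is scikit-learn's TSNE (the dataset choice is ours, not the paper's experiments):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Map 64-dimensional digit images to a 2-D embedding.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE embedding of handwritten digits")
plt.show()
```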
  19. Hofstede, A.H.M. ter; Proper, H.A.; Van der Weide, T.P.: Exploiting fact verbalisation in conceptual information modelling (1997) 0.01
    0.014488437 = product of:
      0.028976874 = sum of:
        0.028976874 = product of:
          0.05795375 = sum of:
            0.05795375 = weight(_text_:22 in 2908) [ClassicSimilarity], result of:
              0.05795375 = score(doc=2908,freq=2.0), product of:
                0.21398507 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.06110665 = queryNorm
                0.2708308 = fieldWeight in 2908, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2908)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information systems. 22(1997) nos.5/6, S.349-385
  20. Klein, H.: Web Content Mining (2004) 0.01
    0.012558327 = product of:
      0.025116654 = sum of:
        0.025116654 = product of:
          0.05023331 = sum of:
            0.05023331 = weight(_text_:wissen in 3154) [ClassicSimilarity], result of:
              0.05023331 = score(doc=3154,freq=2.0), product of:
                0.26354674 = queryWeight, product of:
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.06110665 = queryNorm
                0.19060494 = fieldWeight in 3154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.3128977 = idf(docFreq=1609, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3154)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Wissensorganisation und Edutainment: Wissen im Spannungsfeld von Gesellschaft, Gestaltung und Industrie. Proceedings der 7. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation, Berlin, 21.-23.3.2001. Hrsg.: C. Lehner, H.P. Ohly u. G. Rahmstorf

Languages

  • e 17
  • d 12

Types

  • a 22
  • el 6
  • m 5
  • s 3
  • p 1