Search (17 results, page 1 of 1)

  • theme_ss:"Data Mining"
  • type_ss:"el"
  1. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.03
    0.030838627 = product of:
      0.15419313 = sum of:
        0.15419313 = sum of:
          0.10535504 = weight(_text_:data in 4261) [ClassicSimilarity], result of:
            0.10535504 = score(doc=4261,freq=14.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.7394569 = fieldWeight in 4261, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.0625 = fieldNorm(doc=4261)
          0.04883809 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
            0.04883809 = score(doc=4261,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.30952093 = fieldWeight in 4261, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0625 = fieldNorm(doc=4261)
      0.2 = coord(1/5)
    
    Date
    17. 7.2002 19:22:06
    RSWK
    Data-warehouse-Konzept / Lehrbuch
    Data mining / Lehrbuch
    Subject
    Data-warehouse-Konzept / Lehrbuch
    Data mining / Lehrbuch
    Theme
    Data Mining
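    The nested "product of / sum of" breakdowns shown with each hit are Lucene ClassicSimilarity explain trees. As a worked check of the arithmetic, the 0.03 score of this first hit can be reproduced from the quantities printed above (idf, queryNorm, fieldNorm, raw term frequencies); the short Python sketch below simply re-applies the TF-IDF formulas visible in the tree (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, coord = matching clauses / total clauses) and is illustrative only.
      import math

      QUERY_NORM = 0.04505818  # queryNorm printed in the explain tree

      def term_score(freq, idf, field_norm):
          query_weight = idf * QUERY_NORM                    # queryWeight
          field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      score_data = term_score(freq=14.0, idf=3.1620505, field_norm=0.0625)  # ~0.1054
      score_22   = term_score(freq=2.0,  idf=3.5018296, field_norm=0.0625)  # ~0.0488

      # coord(1/5): only one of five query clauses matched, so the sum is scaled by 0.2.
      print(0.2 * (score_data + score_22))  # ~0.0308, the document score shown above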
  2. Jäger, L.: Von Big Data zu Big Brother (2018) 0.01
    0.011780915 = product of:
      0.058904573 = sum of:
        0.058904573 = sum of:
          0.03448553 = weight(_text_:data in 5234) [ClassicSimilarity], result of:
            0.03448553 = score(doc=5234,freq=6.0), product of:
              0.14247625 = queryWeight, product of:
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.04505818 = queryNorm
              0.24204408 = fieldWeight in 5234, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.1620505 = idf(docFreq=5088, maxDocs=44218)
                0.03125 = fieldNorm(doc=5234)
          0.024419045 = weight(_text_:22 in 5234) [ClassicSimilarity], result of:
            0.024419045 = score(doc=5234,freq=2.0), product of:
              0.15778607 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04505818 = queryNorm
              0.15476047 = fieldWeight in 5234, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=5234)
      0.2 = coord(1/5)
    
    Date
    22. 1.2018 11:33:49
    Source
    https://www.heise.de/tp/features/Von-Big-Data-zu-Big-Brother-3946125.html?view=print
    Theme
    Data Mining
  3. Wongthontham, P.; Abu-Salih, B.: Ontology-based approach for semantic data extraction from social big data : state-of-the-art and research directions (2018) 0.01
    0.008447195 = product of:
      0.042235978 = sum of:
        0.042235978 = product of:
          0.084471956 = sum of:
            0.084471956 = weight(_text_:data in 4097) [ClassicSimilarity], result of:
              0.084471956 = score(doc=4097,freq=16.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5928845 = fieldWeight in 4097, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4097)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The challenge of managing and extracting useful knowledge from social media data sources has attracted much attention from academia and industry. To address this challenge, this paper focuses on the semantic analysis of textual data. We propose an ontology-based approach to extract the semantics of textual data and to define the domain of the data. In other words, we semantically analyse the social data at two levels, i.e. the entity level and the domain level. We have chosen Twitter as the social channel for a proof of concept. Domain knowledge is captured in ontologies, which are then used to enrich the semantics of tweets with a specific semantic conceptual representation of the entities that appear in them. Case studies are used to demonstrate the approach. We experiment with and evaluate the proposed approach on a public dataset collected from Twitter in the politics domain. The ontology-based approach leverages entity extraction and concept mapping in terms of the quantity and accuracy of concept identification.
    Theme
    Data Mining
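    To make the two-level (entity level and domain level) analysis described in the abstract concrete, here is a deliberately tiny, hypothetical Python sketch: the dictionary stands in for a politics ontology and the substring matching stands in for real entity extraction; none of the names are taken from the paper.
      # Toy illustration only; the paper's actual pipeline is far richer.
      POLITICS_ONTOLOGY = {
          "white house": "GovernmentInstitution",
          "senate": "LegislativeBody",
          "barack obama": "Politician",
      }

      def enrich_tweet(text):
          # Entity level: link surface mentions to ontology concepts.
          entities = {mention: concept
                      for mention, concept in POLITICS_ONTOLOGY.items()
                      if mention in text.lower()}
          # Domain level: assign the tweet to a domain if any concept matched.
          return {"text": text,
                  "entities": entities,
                  "domain": "politics" if entities else "unknown"}

      print(enrich_tweet("Barack Obama is back at the White House today"))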
  4. Nohr, H.: Big Data im Lichte der EU-Datenschutz-Grundverordnung (2017) 0.01
    0.007964092 = product of:
      0.03982046 = sum of:
        0.03982046 = product of:
          0.07964092 = sum of:
            0.07964092 = weight(_text_:data in 4076) [ClassicSimilarity], result of:
              0.07964092 = score(doc=4076,freq=8.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.5589768 = fieldWeight in 4076, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4076)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This article deals with the framework conditions for analytical applications such as Big Data that arise from the new European data protection law, in particular the EU General Data Protection Regulation. It presents the key changes and examines the specific data protection provisions with regard to the use of Big Data, as well as the requirements imposed by the Regulation.
    Theme
    Data Mining
  5. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.01
    0.0074663362 = product of:
      0.03733168 = sum of:
        0.03733168 = product of:
          0.07466336 = sum of:
            0.07466336 = weight(_text_:data in 3888) [ClassicSimilarity], result of:
              0.07466336 = score(doc=3888,freq=18.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.52404076 = fieldWeight in 3888, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3888)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
    Theme
    Data Mining
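    For readers who want to try the technique described in the abstract, a minimal sketch using the t-SNE implementation in scikit-learn (not the authors' original code) is given below; the dataset and parameter values are illustrative assumptions.
      import matplotlib.pyplot as plt
      from sklearn.datasets import load_digits
      from sklearn.manifold import TSNE

      X, y = load_digits(return_X_y=True)   # 1797 samples, 64 dimensions

      # Map the 64-dimensional digit images to a 2-D layout.
      embedding = TSNE(n_components=2, perplexity=30, init="pca",
                       random_state=0).fit_transform(X)

      plt.scatter(embedding[:, 0], embedding[:, 1], c=y, s=5, cmap="tab10")
      plt.title("t-SNE embedding of the digits data set")
      plt.show()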
  6. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.01
    0.007315486 = product of:
      0.03657743 = sum of:
        0.03657743 = product of:
          0.07315486 = sum of:
            0.07315486 = weight(_text_:data in 3884) [ClassicSimilarity], result of:
              0.07315486 = score(doc=3884,freq=12.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.513453 = fieldWeight in 3884, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
    Theme
    Data Mining
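    As a rough aid to the abstract: in multiple maps t-SNE every object i receives an importance weight pi_i^(m) in each map m, and the low-dimensional similarities are pooled over all maps before the usual Kullback-Leibler objective is minimised. The LaTeX sketch below gives the general form of this pooling as we read the paper; consult the original for the exact definitions and constraints.
      % Pooled low-dimensional similarity in multiple maps t-SNE (sketch):
      % \pi_i^{(m)} \ge 0 is the importance of object i in map m, with \sum_m \pi_i^{(m)} = 1.
      q_{ij} =
        \frac{\sum_{m} \pi_i^{(m)} \pi_j^{(m)} \left(1 + \lVert y_i^{(m)} - y_j^{(m)} \rVert^2\right)^{-1}}
             {\sum_{k \neq l} \sum_{m'} \pi_k^{(m')} \pi_l^{(m')} \left(1 + \lVert y_k^{(m')} - y_l^{(m')} \rVert^2\right)^{-1}},
      \qquad
      \min_{\{y\},\{\pi\}} \; \mathrm{KL}(P \,\Vert\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}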
  7. Bauckhage, C.: Moderne Textanalyse : neues Wissen für intelligente Lösungen (2016) 0.01
    0.0068971063 = product of:
      0.03448553 = sum of:
        0.03448553 = product of:
          0.06897106 = sum of:
            0.06897106 = weight(_text_:data in 2568) [ClassicSimilarity], result of:
              0.06897106 = score(doc=2568,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48408815 = fieldWeight in 2568, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2568)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    With the ever greater availability of data (Big Data) and rapid advances in data-driven machine learning, we have seen breakthroughs in artificial intelligence in recent years. This talk examines these developments, particularly with regard to the automatic analysis of text data. Using simple examples, we illustrate how modern text analysis works and, again by way of examples, show which practical applications arise today in industries such as publishing, finance, and consulting.
    Source
    https://login.mailingwork.de/public/a_5668_LVrTK/file/data/1125_Textanalyse_Christian-Bauckhage.pdf
    Theme
    Data Mining
  8. Winterhalter, C.: Licence to mine : ein Überblick über Rahmenbedingungen von Text and Data Mining und den aktuellen Stand der Diskussion (2016) 0.01
    0.0068971063 = product of:
      0.03448553 = sum of:
        0.03448553 = product of:
          0.06897106 = sum of:
            0.06897106 = weight(_text_:data in 673) [ClassicSimilarity], result of:
              0.06897106 = score(doc=673,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.48408815 = fieldWeight in 673, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=673)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The article gives an overview of the possibilities for applying text and data mining (TDM) and similar methods on the basis of existing provisions in licence agreements for paid electronic resources, of the debate about additional licences for TDM using Elsevier's TDM policy as an example, and of the state of the discussion on introducing copyright exceptions for TDM for non-commercial scientific purposes.
    Theme
    Data Mining
  9. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.01
    0.006034968 = product of:
      0.03017484 = sum of:
        0.03017484 = product of:
          0.06034968 = sum of:
            0.06034968 = weight(_text_:data in 3883) [ClassicSimilarity], result of:
              0.06034968 = score(doc=3883,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.42357713 = fieldWeight in 3883, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3883)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
    Theme
    Data Mining
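    A rough sketch of the idea in the abstract, written in PyTorch (our assumption; the paper predates it): a small network f_theta maps high-dimensional points into the latent space and is trained to minimise the t-SNE Kullback-Leibler divergence. The high-dimensional similarity matrix P is assumed to be precomputed with the usual perplexity-calibrated Gaussian kernel, exactly as in non-parametric t-SNE.
      import torch
      import torch.nn as nn

      class ParametricTSNE(nn.Module):
          """f_theta: parametric mapping from the data space to the latent space."""
          def __init__(self, d_in, d_out=2):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(d_in, 500), nn.ReLU(),
                  nn.Linear(500, 500), nn.ReLU(),
                  nn.Linear(500, d_out),
              )

          def forward(self, x):
              return self.net(x)

      def tsne_kl(P, Y):
          # Student-t similarities in the latent space, then KL(P || Q) over i != j.
          off_diag = ~torch.eye(Y.shape[0], dtype=torch.bool, device=Y.device)
          num = 1.0 / (1.0 + torch.cdist(Y, Y).pow(2))
          Q = num[off_diag] / num[off_diag].sum()
          Pn = P[off_diag].clamp_min(1e-12)
          return (Pn * (Pn.log() - Q.clamp_min(1e-12).log())).sum()

      # Training alternates: Y = model(X_batch); loss = tsne_kl(P_batch, Y); loss.backward(); ...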
  10. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.01
    0.006034968 = product of:
      0.03017484 = sum of:
        0.03017484 = product of:
          0.06034968 = sum of:
            0.06034968 = weight(_text_:data in 3886) [ClassicSimilarity], result of:
              0.06034968 = score(doc=3886,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.42357713 = fieldWeight in 3886, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
    Theme
    Data Mining
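    The Barnes-Hut approximation discussed above is what makes t-SNE practical for large data sets; in the scikit-learn implementation it is exposed through the method and angle parameters of TSNE. A small sketch follows (the synthetic data and parameter values are illustrative assumptions, not from the paper).
      import numpy as np
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(0)
      X = rng.normal(size=(20000, 50))   # a large, purely synthetic data set

      # method="barnes_hut" approximates the gradient in O(N log N); "exact" is O(N^2).
      # angle is the Barnes-Hut theta and trades accuracy against speed.
      Y = TSNE(n_components=2, method="barnes_hut", angle=0.5,
               perplexity=30, random_state=0).fit_transform(X)
      print(Y.shape)  # (20000, 2)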
  11. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.01
    0.0056314636 = product of:
      0.028157318 = sum of:
        0.028157318 = product of:
          0.056314636 = sum of:
            0.056314636 = weight(_text_:data in 3887) [ClassicSimilarity], result of:
              0.056314636 = score(doc=3887,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.3952563 = fieldWeight in 3887, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3887)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. By exploring how it behaves in simple cases, we can learn to use it more effectively. We'll walk through a series of simple examples to illustrate what t-SNE diagrams can and cannot show. The t-SNE technique really is useful, but only if you know how to interpret it.
    Theme
    Data Mining
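    One practical takeaway of the article is that a single t-SNE plot can mislead, so settings such as perplexity should always be varied. A minimal sketch of such a sweep on an illustrative toy data set (assumed, not from the article):
      import matplotlib.pyplot as plt
      from sklearn.datasets import make_blobs
      from sklearn.manifold import TSNE

      X, y = make_blobs(n_samples=500, centers=4, n_features=20, random_state=0)

      # Re-run t-SNE with several perplexities; cluster shapes and distances will differ.
      fig, axes = plt.subplots(1, 4, figsize=(16, 4))
      for ax, perplexity in zip(axes, [2, 5, 30, 100]):
          Y = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
          ax.scatter(Y[:, 0], Y[:, 1], c=y, s=5)
          ax.set_title("perplexity = %d" % perplexity)
      plt.show()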
  12. Datentracking in der Wissenschaft : Aggregation und Verwendung bzw. Verkauf von Nutzungsdaten durch Wissenschaftsverlage. Ein Informationspapier des Ausschusses für Wissenschaftliche Bibliotheken und Informationssysteme der Deutschen Forschungsgemeinschaft (2021) 0.00
    0.0042235977 = product of:
      0.021117989 = sum of:
        0.021117989 = product of:
          0.042235978 = sum of:
            0.042235978 = weight(_text_:data in 248) [ClassicSimilarity], result of:
              0.042235978 = score(doc=248,freq=4.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.29644224 = fieldWeight in 248, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=248)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The information paper describes the digital tracking of scholarly activities. Researchers use a wide range of digital information resources every day, for example literature and full-text databases. This frequently leaves usage traces that reveal which content was searched for and used, how long it was viewed, and other kinds of scholarly activity. These usage traces can be recorded, aggregated, and reused or sold by the providers of the information resources. The paper outlines the transformation of academic publishers into data analytics businesses, points to the resulting consequences for science and its institutions, and names the types of data collection being employed. It thus serves above all to describe current practices and is intended to stimulate discussion about their consequences for science. It is addressed to all researchers and to all actors in the scholarly landscape.
    Theme
    Data Mining
  13. Kipcic, O.; Cramer, C.: Wie Zeitungsinhalte Forschung und Entwicklung befördern (2017) 0.00
    0.00348429 = product of:
      0.01742145 = sum of:
        0.01742145 = product of:
          0.0348429 = sum of:
            0.0348429 = weight(_text_:data in 3885) [ClassicSimilarity], result of:
              0.0348429 = score(doc=3885,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24455236 = fieldWeight in 3885, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3885)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Theme
    Data Mining
  14. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.00
    0.0034485532 = product of:
      0.017242765 = sum of:
        0.017242765 = product of:
          0.03448553 = sum of:
            0.03448553 = weight(_text_:data in 1178) [ClassicSimilarity], result of:
              0.03448553 = score(doc=1178,freq=6.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.24204408 = fieldWeight in 1178, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
    Theme
    Data Mining
  15. Mohr, J.W.; Bogdanov, P.: Topic models : what they are and why they matter (2013) 0.00
    0.0029865343 = product of:
      0.014932672 = sum of:
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 1142) [ClassicSimilarity], result of:
              0.029865343 = score(doc=1142,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 1142, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1142)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Theme
    Data Mining
  16. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.00
    0.0029865343 = product of:
      0.014932672 = sum of:
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 3205) [ClassicSimilarity], result of:
              0.029865343 = score(doc=3205,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 3205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3205)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Theme
    Data Mining
  17. Loonus, Y.: Einsatzbereiche der KI und ihre Relevanz für Information Professionals (2017) 0.00
    0.0029865343 = product of:
      0.014932672 = sum of:
        0.014932672 = product of:
          0.029865343 = sum of:
            0.029865343 = weight(_text_:data in 5668) [ClassicSimilarity], result of:
              0.029865343 = score(doc=5668,freq=2.0), product of:
                0.14247625 = queryWeight, product of:
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.04505818 = queryNorm
                0.2096163 = fieldWeight in 5668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1620505 = idf(docFreq=5088, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5668)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Theme
    Data Mining