Search (56 results, page 1 of 3)

  • × theme_ss:"Data Mining"
  1. KDD : techniques and applications (1998) 0.13
    0.13394961 = product of:
      0.26789922 = sum of:
        0.26789922 = sum of:
          0.185057 = weight(_text_:discovery in 6783) [ClassicSimilarity], result of:
            0.185057 = score(doc=6783,freq=2.0), product of:
              0.26668423 = queryWeight, product of:
                5.2338576 = idf(docFreq=640, maxDocs=44218)
                0.050953664 = queryNorm
              0.69391805 = fieldWeight in 6783, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2338576 = idf(docFreq=640, maxDocs=44218)
                0.09375 = fieldNorm(doc=6783)
          0.082842216 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
            0.082842216 = score(doc=6783,freq=2.0), product of:
              0.17843105 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050953664 = queryNorm
              0.46428138 = fieldWeight in 6783, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.09375 = fieldNorm(doc=6783)
      0.5 = coord(1/2)
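Every relevance figure in this listing is a Lucene ClassicSimilarity explain tree. As a sketch, the arithmetic for result 1 can be redone in Python with the constants copied verbatim from the output (the outer coord(1/2) halves the sum because one of two top-level query clauses matched):

```python
import math

# Constants copied from the explain output for result 1 (doc 6783).
QUERY_NORM = 0.050953664  # queryNorm, shared by all terms of the query

def term_score(freq, idf, field_norm):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * QUERY_NORM       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

discovery = term_score(freq=2.0, idf=5.2338576, field_norm=0.09375)
term_22 = term_score(freq=2.0, idf=3.5018296, field_norm=0.09375)

# Sum the matching clauses, then apply coord(1/2) = 0.5.
total = (discovery + term_22) * 0.5
print(round(total, 6))  # ~0.13395 (the explain output reports 0.13394961)
```

The same pattern reads off every other entry below: freq feeds tf through a square root, idf and fieldNorm are per-term and per-field constants, and coord(m/n) scales by the fraction of query clauses matched.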
    
    Footnote
A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held in Singapore, 22-23 Feb 1997
  2. Chen, Z.: Knowledge discovery and system-user partnership : on a production 'adversarial partnership' approach (1994) 0.07
    0.06896667 = product of:
      0.13793334 = sum of:
        0.13793334 = product of:
          0.2758667 = sum of:
            0.2758667 = weight(_text_:discovery in 6759) [ClassicSimilarity], result of:
              0.2758667 = score(doc=6759,freq=10.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                1.0344319 = fieldWeight in 6759, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6759)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Examines the relationship between systems and users from the knowledge discovery in databases, or data mining, perspective. A comprehensive study on knowledge discovery in human computer symbiosis is needed. Proposes a database-user adversarial partnership, which is general enough to cover knowledge discovery and security issues related to databases and their users. It can be further generalized into a system-user adversarial partnership. Discusses opportunities provided by knowledge discovery techniques and potential social implications
  3. Fayyad, U.; Piatetsky-Shapiro, G.; Smyth, P.: From data mining to knowledge discovery in databases (1996) 0.07
    0.06677669 = product of:
      0.13355339 = sum of:
        0.13355339 = product of:
          0.26710677 = sum of:
            0.26710677 = weight(_text_:discovery in 7458) [ClassicSimilarity], result of:
              0.26710677 = score(doc=7458,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                1.0015844 = fieldWeight in 7458, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.078125 = fieldNorm(doc=7458)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Gives an overview of data mining and knowledge discovery in databases. Clarifies how they are related both to each other and to related fields. Mentions real world applications of data mining techniques, challenges involved in real world applications of knowledge discovery, and current and future research directions
  4. Knowledge discovery and data mining (1998) 0.07
    0.06542753 = product of:
      0.13085505 = sum of:
        0.13085505 = product of:
          0.2617101 = sum of:
            0.2617101 = weight(_text_:discovery in 2898) [ClassicSimilarity], result of:
              0.2617101 = score(doc=2898,freq=4.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.9813483 = fieldWeight in 2898, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2898)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    A special issue devoted to knowledge discovery and data mining
  5. Raghavan, V.V.; Deogun, J.S.; Sever, H.: Knowledge discovery and data mining : introduction (1998) 0.05
    0.053974956 = product of:
      0.10794991 = sum of:
        0.10794991 = product of:
          0.21589983 = sum of:
            0.21589983 = weight(_text_:discovery in 2899) [ClassicSimilarity], result of:
              0.21589983 = score(doc=2899,freq=8.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.809571 = fieldWeight in 2899, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2899)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Defines knowledge discovery and database mining. The challenge for knowledge discovery in databases (KDD) is to automatically process large quantities of raw data, identify the most significant and meaningful patterns, and present these as knowledge appropriate for achieving a user's goals. Data mining is the process of deriving useful knowledge from real world databases through the application of pattern extraction techniques. Explains the goals of, and motivation for, research work on data mining. Discusses the nature of database contents, along with problems within the field of data mining
    Footnote
    Contribution to a special issue devoted to knowledge discovery and data mining
  6. Information visualization in data mining and knowledge discovery (2002) 0.05
    0.053167764 = product of:
      0.10633553 = sum of:
        0.10633553 = sum of:
          0.09252849 = weight(_text_:discovery in 1789) [ClassicSimilarity], result of:
            0.09252849 = score(doc=1789,freq=18.0), product of:
              0.26668423 = queryWeight, product of:
                5.2338576 = idf(docFreq=640, maxDocs=44218)
                0.050953664 = queryNorm
              0.346959 = fieldWeight in 1789, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                5.2338576 = idf(docFreq=640, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.013807036 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.013807036 = score(doc=1789,freq=2.0), product of:
              0.17843105 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.050953664 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.5 = coord(1/2)
    
    Date
    23. 3.2008 19:10:22
    Footnote
Review in: JASIST 54(2003) no.9, pp.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most closely related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). 
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of data visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. 
This text concludes with a thorough and useful 17-page index and a lengthy yet integrative 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement its goal of integrating work in this area. Unfortunately, many of the application chapters are terse, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
  7. Wu, K.J.; Chen, M.-C.; Sun, Y.: Automatic topics discovery from hyperlinked documents (2004) 0.05
    0.051725004 = product of:
      0.10345001 = sum of:
        0.10345001 = product of:
          0.20690002 = sum of:
            0.20690002 = weight(_text_:discovery in 2563) [ClassicSimilarity], result of:
              0.20690002 = score(doc=2563,freq=10.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.77582395 = fieldWeight in 2563, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2563)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Topic discovery is an important means for marketing, e-Business and social science studies. As well, it can be applied to various purposes, such as identifying a group with certain properties and observing the emergence and diminishment of a certain cyber community. Previous topic discovery work (J.M. Kleinberg, Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, California, p. 668) requires manual judgment of usefulness of outcomes and is thus incapable of handling the explosive growth of the Internet. In this paper, we propose the Automatic Topic Discovery (ATD) method, which combines a method of base set construction, a clustering algorithm and an iterative principal eigenvector computation method to discover the topics relevant to a given query without using manual examination. Given a query, ATD returns with topics associated with the query and top representative pages for each topic. Our experiments show that the ATD method performs better than the traditional eigenvector method in terms of computation time and topic discovery quality.
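The "iterative principal eigenvector computation" the authors build on is in the family of Kleinberg's link-analysis work cited above. As an illustrative sketch only (not the ATD implementation), power iteration over a toy hyperlink matrix looks like this; the graph and scores here are hypothetical:

```python
import math

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def principal_eigenvector(M, iters=100, tol=1e-10):
    """Power iteration: dominant eigenvector of a nonnegative matrix."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = mat_vec(M, v)
        norm = math.sqrt(sum(x * x for x in w))
        w = [x / norm for x in w]
        if max(abs(a - b) for a, b in zip(w, v)) < tol:
            return w
        v = w
    return v

# Toy hyperlink graph: pages 0 and 1 link to each other and both link to page 2.
A = [[0, 1, 1],
     [1, 0, 1],
     [0, 0, 0]]
# HITS-style authority scores come from the dominant eigenvector of A^T A.
n = len(A)
AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
authority = principal_eigenvector(AtA)
# Page 2, cited by both other pages, ends up with the highest authority score.
```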
  8. Trybula, W.J.: Data mining and knowledge discovery (1997) 0.05
    0.046743687 = product of:
      0.093487374 = sum of:
        0.093487374 = product of:
          0.18697475 = sum of:
            0.18697475 = weight(_text_:discovery in 2300) [ClassicSimilarity], result of:
              0.18697475 = score(doc=2300,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.7011091 = fieldWeight in 2300, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2300)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
State of the art review of the recently developed concepts of data mining (defined as the automated process of evaluating data and finding relationships) and knowledge discovery (defined as the automated process of extracting information, especially unpredicted relationships or previously unknown patterns among the data) with particular reference to numerical data. Includes: the knowledge acquisition process; data mining; evaluation methods; and knowledge discovery. Concludes that existing work in the field is confusing because the terminology is inconsistent and poorly defined. Although methods are available for analyzing and cleaning databases, better coordinated efforts should be directed toward providing users with improved means of structuring search mechanisms to explore the data for relationships
  9. Bell, D.A.; Guan, J.W.: Computational methods for rough classification and discovery (1998) 0.05
    0.046743687 = product of:
      0.093487374 = sum of:
        0.093487374 = product of:
          0.18697475 = sum of:
            0.18697475 = weight(_text_:discovery in 2909) [ClassicSimilarity], result of:
              0.18697475 = score(doc=2909,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.7011091 = fieldWeight in 2909, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2909)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Rough set theory is a mathematical tool to deal with vagueness and uncertainty. To apply the theory, it needs to be associated with efficient and effective computational methods. A relation can be used to represent a decision table for use in decision making. By using this kind of table, rough set theory can be applied successfully to rough classification and knowledge discovery. Presents computational methods for using rough sets to identify classes in datasets, finding dependencies in relations, and discovering rules which are hidden in databases. Illustrates the methods with a running example from a database of car test results
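As a minimal sketch of the rough-classification idea (illustrative only, not the paper's computational methods): rows sharing the same condition-attribute values form indiscernibility classes, and a decision class is bracketed by a lower approximation (classes certainly inside it) and an upper approximation (classes possibly inside it). The attribute names and values below are hypothetical:

```python
from collections import defaultdict

def approximations(table, cond_attrs, decision, target):
    """table: list of dicts; returns (lower, upper) sets of row indices."""
    blocks = defaultdict(set)  # indiscernibility classes
    for i, row in enumerate(table):
        blocks[tuple(row[a] for a in cond_attrs)].add(i)
    target_set = {i for i, row in enumerate(table) if row[decision] == target}
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target_set:    # entirely inside: certainly in the class
            lower |= block
        if block & target_set:     # overlaps: possibly in the class
            upper |= block
    return lower, upper

# Toy decision table in the spirit of the paper's car test example.
cars = [
    {"power": "high", "weight": "low",  "fast": "yes"},
    {"power": "high", "weight": "low",  "fast": "yes"},
    {"power": "high", "weight": "high", "fast": "yes"},
    {"power": "high", "weight": "high", "fast": "no"},  # conflicts with row 2
    {"power": "low",  "weight": "high", "fast": "no"},
]
lower, upper = approximations(cars, ["power", "weight"], "fast", "yes")
# lower = {0, 1}; upper = {0, 1, 2, 3}: rows 2 and 3 are indiscernible
# on the condition attributes yet disagree on the decision.
```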
    Footnote
    Contribution to a special issue devoted to knowledge discovery and data mining
  10. Galal, G.M.; Cook, D.J.; Holder, L.B.: Exploiting parallelism in a structural scientific discovery system to improve scalability (1999) 0.05
    0.04626425 = product of:
      0.0925285 = sum of:
        0.0925285 = product of:
          0.185057 = sum of:
            0.185057 = weight(_text_:discovery in 2952) [ClassicSimilarity], result of:
              0.185057 = score(doc=2952,freq=8.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.69391805 = fieldWeight in 2952, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2952)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The large amount of data collected today is quickly overwhelming researchers' abilities to interpret the data and discover interesting patterns. Knowledge discovery and data mining approaches hold the potential to automate the interpretation process, but these approaches frequently utilize computationally expensive algorithms. In particular, scientific discovery systems focus on the utilization of richer data representation, sometimes without regard for scalability. This research investigates approaches for scaling a particular knowledge discovery in databases (KDD) system, SUBDUE, using parallel and distributed resources. SUBDUE has been used to discover interesting and repetitive concepts in graph-based databases from a variety of domains, but requires a substantial amount of processing time. Experiments that demonstrate scalability of parallel versions of the SUBDUE system are performed using CAD circuit databases and artificially-generated databases, and potential achievements and obstacles are discussed
  11. Cios, K.J.; Pedrycz, W.; Swiniarksi, R.: Data mining methods for knowledge discovery (1998) 0.05
    0.04626425 = product of:
      0.0925285 = sum of:
        0.0925285 = product of:
          0.185057 = sum of:
            0.185057 = weight(_text_:discovery in 6075) [ClassicSimilarity], result of:
              0.185057 = score(doc=6075,freq=2.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.69391805 = fieldWeight in 6075, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6075)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  12. Advances in knowledge discovery and data mining (1996) 0.05
    0.04626425 = product of:
      0.0925285 = sum of:
        0.0925285 = product of:
          0.185057 = sum of:
            0.185057 = weight(_text_:discovery in 413) [ClassicSimilarity], result of:
              0.185057 = score(doc=413,freq=2.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.69391805 = fieldWeight in 413, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.09375 = fieldNorm(doc=413)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Ester, M.; Sander, J.: Knowledge discovery in databases : Techniken und Anwendungen (2000) 0.04
    0.043618355 = product of:
      0.08723671 = sum of:
        0.08723671 = product of:
          0.17447342 = sum of:
            0.17447342 = weight(_text_:discovery in 1374) [ClassicSimilarity], result of:
              0.17447342 = score(doc=1374,freq=4.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.6542322 = fieldWeight in 1374, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1374)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Knowledge Discovery in Databases (KDD) is a current research and application area of computer science. The goal of KDD is to autonomously discover decision-relevant but previously unknown relationships and connections in large volumes of data, and to present them to the analyst or user in a clear form. The authors give an accessible account of the techniques and applications of this interdisciplinary field.
  14. Benoit, G.: Data mining (2002) 0.04
    0.040066015 = product of:
      0.08013203 = sum of:
        0.08013203 = product of:
          0.16026406 = sum of:
            0.16026406 = weight(_text_:discovery in 4296) [ClassicSimilarity], result of:
              0.16026406 = score(doc=4296,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.60095066 = fieldWeight in 4296, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4296)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
Data mining (DM) is a multistaged process of extracting previously unanticipated knowledge from large databases, and applying the results to decision making. Data mining tools detect patterns from the data and infer associations and rules from them. The extracted information may then be applied to prediction or classification models by identifying relations within the data records or between databases. Those patterns and rules can then guide decision making and forecast the effects of those decisions. However, this definition may be applied equally to "knowledge discovery in databases" (KDD). Indeed, in the recent literature of DM and KDD, a source of confusion has emerged, making it difficult to determine the exact parameters of both. KDD is sometimes viewed as the broader discipline, of which data mining is merely a component, specifically pattern extraction, evaluation, and cleansing methods (Raghavan, Deogun, & Sever, 1998, p. 397). Thurasingham (1999, p. 2) remarked that "knowledge discovery," "pattern discovery," "data dredging," "information extraction," and "knowledge mining" are all employed as synonyms for DM. Trybula, in his ARIST chapter on text mining, observed that the "existing work [in KDD] is confusing because the terminology is inconsistent and poorly defined.
  15. Loh, S.; Oliveira, J.P.M. de; Gastal, F.L.: Knowledge discovery in textual documentation : qualitative and quantitative analyses (2001) 0.04
    0.040066015 = product of:
      0.08013203 = sum of:
        0.08013203 = product of:
          0.16026406 = sum of:
            0.16026406 = weight(_text_:discovery in 4482) [ClassicSimilarity], result of:
              0.16026406 = score(doc=4482,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.60095066 = fieldWeight in 4482, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4482)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
This paper presents an approach for performing knowledge discovery in texts through qualitative and quantitative analyses of high-level textual characteristics. Instead of applying mining techniques on attribute values, terms or keywords extracted from texts, the discovery process works over concepts identified in texts. Concepts represent real world events and objects, and they help the user to understand ideas, trends, thoughts, opinions and intentions present in texts. The approach combines a quasi-automatic categorisation task (for qualitative analysis) with a mining process (for quantitative analysis). The goal is to find new and useful knowledge inside a textual collection through the use of mining techniques applied over concepts (representing text content). In this paper, an application of the approach to medical records of a psychiatric hospital is presented. The approach helps physicians to extract knowledge about patients and diseases. This knowledge may be used for epidemiological studies, for training professionals, and it may also be used to support physicians in diagnosing and evaluating diseases.
  16. Kraker, P.; Kittel, C,; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.04
    0.040066015 = product of:
      0.08013203 = sum of:
        0.08013203 = product of:
          0.16026406 = sum of:
            0.16026406 = weight(_text_:discovery in 3205) [ClassicSimilarity], result of:
              0.16026406 = score(doc=3205,freq=6.0), product of:
                0.26668423 = queryWeight, product of:
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.050953664 = queryNorm
                0.60095066 = fieldWeight in 3205, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2338576 = idf(docFreq=640, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
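As an illustrative sketch only (the actual Open Knowledge Maps pipeline chains more elaborate summarization and similarity steps), a similarity measure over article metadata can be as simple as cosine similarity on bag-of-words titles, whose pairwise scores would then feed a clustering step; the titles below are sample data:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two strings as bags of lowercased words."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

titles = [
    "Knowledge discovery and data mining",
    "Data mining methods for knowledge discovery",
    "Rough classification and discovery",
]
# Pairwise similarities: the first two titles share most of their vocabulary,
# so a clustering step would group them on the same region of a knowledge map.
sims = [(i, j, cosine(titles[i], titles[j]))
        for i in range(len(titles)) for j in range(i + 1, len(titles))]
```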
  17. Fayyad, U.M.: Data mining and knowledge discovery : making sense out of data (1996) 0.04
    Abstract
    Defines knowledge discovery and data mining (KDD) as the overall process of extracting high level knowledge from low level data. Outlines the KDD process. Explains how KDD is related to the fields of: statistics, pattern recognition, machine learning, artificial intelligence, databases and data warehouses
  18. Knowledge management in fuzzy databases (2000) 0.04
    Abstract
    The volume presents recent developments in the introduction of fuzzy, probabilistic and rough elements into basic components of fuzzy databases, and their use (notably querying and information retrieval), from the point of view of data mining and knowledge discovery. The main novel aspect of the volume is that issues related to the use of fuzzy elements in databases, database querying, information retrieval, etc. are presented and discussed from the point of view, and for the purpose, of data mining and knowledge discovery, which have been 'hot topics' in recent years
  19. Wei, C.-P.; Lee, Y.-H.; Chiang, Y.-S.; Chen, C.-T.; Yang, C.C.C.: Exploiting temporal characteristics of features for effectively discovering event episodes from news corpora (2014) 0.03
    Abstract
    An organization performing environmental scanning generally monitors or tracks various events concerning its external environment. One of the major resources for environmental scanning is online news documents, which are readily accessible on news websites or infomediaries. However, the proliferation of the World Wide Web, which increases information sources and improves information circulation, has vastly expanded the amount of information to be scanned. Thus, it is essential to develop an effective event episode discovery mechanism to organize news documents pertaining to an event of interest. In this study, we propose two new metrics, Term Frequency × Inverse Document Frequency_Tempo (TF×IDF_Tempo) and TF×Enhanced-IDF_Tempo, and develop a temporal-based event episode discovery (TEED) technique that uses the proposed metrics for feature selection and document representation. Using a traditional TF×IDF-based hierarchical agglomerative clustering technique as a performance benchmark, our empirical evaluation reveals that the proposed TEED technique outperforms its benchmark, as measured by cluster recall and cluster precision. In addition, the use of TF×Enhanced-IDF_Tempo significantly improves the effectiveness of event episode discovery when compared with the use of TF×IDF_Tempo.
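    The benchmark against which TEED is evaluated is traditional TF×IDF weighting. A minimal sketch of that standard scheme follows; the temporal Tempo variants are defined only in the paper itself and are not reproduced here, and the toy documents are illustrative assumptions:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute standard TF x IDF weights for tokenized documents.

    tf(t, d) = frequency of term t in document d
    idf(t)   = log(N / df(t)), where df(t) is the number of documents
               containing t and N is the collection size.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [["event", "news", "episode"],
        ["news", "stream", "news"],
        ["event", "stream"]]
w = tf_idf(docs)
# "news" occurs twice in the second document and in 2 of 3 documents,
# so its weight there is 2 * log(3/2)
```

    Terms appearing in every document get weight zero (idf = log 1), which is the property the temporal variants refine by conditioning document frequency on time.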
  20. Pons-Porrata, A.; Berlanga-Llavori, R.; Ruiz-Shulcloper, J.: Topic discovery based on text mining techniques (2007) 0.03
    Abstract
    In this paper, we present a topic discovery system aimed to reveal the implicit knowledge present in news streams. This knowledge is expressed as a hierarchy of topic/subtopics, where each topic contains the set of documents that are related to it and a summary extracted from these documents. Summaries so built are useful to browse and select topics of interest from the generated hierarchies. Our proposal consists of a new incremental hierarchical clustering algorithm, which combines both partitional and agglomerative approaches, taking the main benefits of each. Finally, a new summarization method based on Testor Theory has been proposed to build the topic summaries. Experimental results in the TDT2 collection demonstrate its usefulness and effectiveness not only as a topic detection system, but also as a classification and summarization tool.
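    The agglomerative side of such a hybrid can be illustrated with a naive single-linkage sketch. This is a generic illustration only, not the paper's incremental algorithm; the one-dimensional points and the `distance` function are assumptions:

```python
def agglomerative(points, distance, k):
    """Naive single-linkage agglomerative clustering.

    Repeatedly merges the two closest clusters (closest = smallest
    pairwise member distance) until k clusters remain. O(n^3), for
    illustration only.
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

pts = [0.0, 0.1, 5.0, 5.2, 9.9]
out = agglomerative(pts, lambda a, b: abs(a - b), 3)
# the two tight pairs merge first, leaving 9.9 as a singleton
```

    An incremental variant, as the abstract suggests, would instead assign each arriving document to the nearest existing cluster or spawn a new one, avoiding the full pairwise recomputation.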

Languages

  • e 48
  • d 8

Types

  • a 41
  • m 10
  • s 10
  • el 3