Search (60 results, page 1 of 3)

  • Active filter: theme_ss:"Data Mining"
  1. Ebrahimi, M.; ShafieiBavani, E.; Wong, R.; Chen, F.: Twitter user geolocation by filtering of highly mentioned users (2018) 0.05
    0.054253697 = product of:
      0.108507395 = sum of:
        0.09592469 = weight(_text_:media in 4286) [ClassicSimilarity], result of:
          0.09592469 = score(doc=4286,freq=4.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.43911293 = fieldWeight in 4286, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=4286)
        0.012582705 = product of:
          0.02516541 = sum of:
            0.02516541 = weight(_text_:research in 4286) [ClassicSimilarity], result of:
              0.02516541 = score(doc=4286,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.18912788 = fieldWeight in 4286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4286)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
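    The indented breakdown above (and the analogous ones under the following results) is Lucene "explain" output for the ClassicSimilarity (TF-IDF) ranking model: each per-term weight is a query-side factor times a document-side factor, and the clause scores are then summed and scaled by the coord(matched clauses / total clauses) factors shown, roughly
    \[ \mathrm{weight}(t,d) = \underbrace{\mathrm{idf}(t)\cdot\mathrm{queryNorm}}_{\mathrm{queryWeight}} \times \underbrace{\sqrt{\mathrm{freq}(t,d)}\cdot\mathrm{idf}(t)\cdot\mathrm{fieldNorm}(d)}_{\mathrm{fieldWeight}} \]
    For this first result the logged values recombine as 0.5 × (0.21845107 × 0.43911293 + 0.5 × (0.13306029 × 0.18912788)) ≈ 0.0543, matching the displayed score of 0.054253697.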
    
    Abstract
    Geolocated social media data provide a powerful source of information about places and regional human behavior. Because only a small amount of social media data have been geolocation-annotated, inference techniques play a substantial role in increasing the volume of annotated data. Conventional research in this area has been based on the text content of posts from a given user or on the social network of the user, with some recent crossovers between the text- and network-based approaches. This paper proposes a novel approach to categorize highly-mentioned users (celebrities) into Local and Global types, and consequently use Local celebrities as location indicators. A label propagation algorithm is then used over the refined social network for geolocation inference. Finally, we propose a hybrid approach by merging a text-based method into our network-based approach as a back-off strategy. Empirical experiments over three standard Twitter benchmark data sets demonstrate that our approach outperforms state-of-the-art user geolocation methods.
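    The label-propagation step described above can be illustrated with a minimal sketch; this is not the authors' implementation, and the toy mention graph, seed labels, and iteration count are illustrative assumptions only.

```python
# Minimal sketch of label propagation for user geolocation over a mention network.
from collections import Counter
import networkx as nx

def propagate_locations(graph, seed_locations, iterations=10):
    """Assign each unlabeled user the most common location among its labeled neighbours."""
    labels = dict(seed_locations)            # user -> location for geotagged seed users
    for _ in range(iterations):
        updates = {}
        for user in graph.nodes:
            if user in seed_locations:        # ground-truth seeds stay fixed
                continue
            neighbour_labels = [labels[n] for n in graph.neighbors(user) if n in labels]
            if neighbour_labels:
                updates[user] = Counter(neighbour_labels).most_common(1)[0][0]
        labels.update(updates)
    return labels

# Toy mention network: an edge links users who mention each other;
# "local_celebrity" plays the role of a Local celebrity used as a location indicator.
g = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"),
              ("local_celebrity", "b"), ("local_celebrity", "c")])
print(propagate_locations(g, {"a": "Sydney", "local_celebrity": "Sydney"}))
```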
  2. Wongthontham, P.; Abu-Salih, B.: Ontology-based approach for semantic data extraction from social big data : state-of-the-art and research directions (2018) 0.04
    0.04020585 = product of:
      0.0804117 = sum of:
        0.067829 = weight(_text_:media in 4097) [ClassicSimilarity], result of:
          0.067829 = score(doc=4097,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.31049973 = fieldWeight in 4097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=4097)
        0.012582705 = product of:
          0.02516541 = sum of:
            0.02516541 = weight(_text_:research in 4097) [ClassicSimilarity], result of:
              0.02516541 = score(doc=4097,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.18912788 = fieldWeight in 4097, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4097)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The challenge of managing and extracting useful knowledge from social media data sources has attracted much attention from academia and industry. To address this challenge, this paper focuses on the semantic analysis of textual data. We propose an ontology-based approach to extract the semantics of textual data and define the domain of the data. In other words, we semantically analyse the social data at two levels, i.e. the entity level and the domain level. We have chosen Twitter as the social channel for a proof of concept. Domain knowledge is captured in ontologies, which are then used to enrich the semantics of tweets with a specific semantic conceptual representation of the entities that appear in them. Case studies are used to demonstrate this approach. We experiment with and evaluate our proposed approach on a public dataset collected from Twitter in the politics domain. The ontology-based approach leverages entity extraction and concept mappings in terms of the quantity and accuracy of concept identification.
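    A minimal sketch of the two-level (entity level plus domain level) annotation idea in this abstract; the tiny hand-written ontology and tweet are illustrative stand-ins, whereas a real system would query an OWL/RDF ontology and a proper entity extractor.

```python
# Minimal sketch: map entities mentioned in a tweet to ontology classes and domains.
# The dictionary below is an illustrative stand-in for a real politics ontology.
ontology = {
    "angela merkel": {"class": "Politician", "domain": "Politics"},
    "bundestag": {"class": "Institution", "domain": "Politics"},
    "champions league": {"class": "Competition", "domain": "Sport"},
}

def annotate(tweet):
    """Return entity-level annotations plus a coarse domain label for the tweet."""
    text = tweet.lower()
    hits = [dict(entity=name, **info) for name, info in ontology.items() if name in text]
    domains = {h["domain"] for h in hits}
    return {"entities": hits, "domain": domains.pop() if len(domains) == 1 else "Mixed/Unknown"}

print(annotate("Angela Merkel addressed the Bundestag this morning"))
```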
  3. Survey of text mining : clustering, classification, and retrieval (2004) 0.03
    0.033504877 = product of:
      0.067009754 = sum of:
        0.056524165 = weight(_text_:media in 804) [ClassicSimilarity], result of:
          0.056524165 = score(doc=804,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.25874978 = fieldWeight in 804, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=804)
        0.010485589 = product of:
          0.020971177 = sum of:
            0.020971177 = weight(_text_:research in 804) [ClassicSimilarity], result of:
              0.020971177 = score(doc=804,freq=2.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.15760657 = fieldWeight in 804, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=804)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
  4. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.03
    0.028720377 = product of:
      0.057440754 = sum of:
        0.047962345 = weight(_text_:media in 1833) [ClassicSimilarity], result of:
          0.047962345 = score(doc=1833,freq=4.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.21955647 = fieldWeight in 1833, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.00947841 = product of:
          0.01895682 = sum of:
            0.01895682 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.01895682 = score(doc=1833,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    11. 5.2008 19:49:22
    LCSH
    Mass media / Archival resources / Congresses
    Subject
    Mass media / Archival resources / Congresses
  5. Mining text data (2012) 0.03
    0.028541211 = product of:
      0.057082422 = sum of:
        0.045219332 = weight(_text_:media in 362) [ClassicSimilarity], result of:
          0.045219332 = score(doc=362,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.20699982 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.011863088 = product of:
          0.023726176 = sum of:
            0.023726176 = weight(_text_:research in 362) [ClassicSimilarity], result of:
              0.023726176 = score(doc=362,freq=4.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.17831147 = fieldWeight in 362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Text mining applications have experienced tremendous advances because of web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book covers a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data, which makes the mining process much more challenging. A number of methods have been designed, such as transfer learning and cross-lingual mining, for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
    Content
    Contents: An Introduction to Text Mining.- Information Extraction from Text.- A Survey of Text Summarization Techniques.- A Survey of Text Clustering Algorithms.- Dimensionality Reduction and Topic Modeling.- A Survey of Text Classification Algorithms.- Transfer Learning for Text Mining.- Probabilistic Models for Text Mining.- Mining Text Streams.- Translingual Mining from Text Data.- Text Mining in Multimedia.- Text Analytics in Social Media.- A Survey of Opinion Mining and Sentiment Analysis.- Biomedical Text Mining: A Survey of Recent Progress.- Index.
  6. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.02
    0.01998431 = product of:
      0.07993724 = sum of:
        0.07993724 = weight(_text_:media in 5997) [ClassicSimilarity], result of:
          0.07993724 = score(doc=5997,freq=4.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.36592746 = fieldWeight in 5997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.25 = coord(1/4)
    
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
  7. Liu, W.; Weichselbraun, A.; Scharl, A.; Chang, E.: Semi-automatic ontology extension using spreading activation (2005) 0.02
    0.019783458 = product of:
      0.07913383 = sum of:
        0.07913383 = weight(_text_:media in 3028) [ClassicSimilarity], result of:
          0.07913383 = score(doc=3028,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.3622497 = fieldWeight in 3028, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3028)
      0.25 = coord(1/4)
    
    Abstract
    This paper describes a system to semi-automatically extend and refine ontologies by mining textual data from the Web sites of international online media. Expanding a seed ontology creates a semantic network through co-occurrence analysis, trigger phrase analysis, and disambiguation based on the WordNet lexical dictionary. Spreading activation then processes this semantic network to find the most probable candidates for inclusion in an extended ontology. Approaches to identifying hierarchical relationships such as subsumption, head noun analysis and WordNet consultation are used to confirm and classify the found relationships. Using a seed ontology on "climate change" as an example, this paper demonstrates how spreading activation improves the result by naturally integrating the mentioned methods.
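    A minimal sketch of the spreading-activation step described above; this is not the authors' system, and the toy co-occurrence network, decay, threshold, and step count are illustrative assumptions.

```python
# Minimal sketch of spreading activation over a weighted term co-occurrence network.
import networkx as nx

def spread_activation(g, seeds, decay=0.5, threshold=0.1, steps=3):
    """Propagate activation from seed ontology concepts along weighted edges."""
    activation = dict(seeds)                          # term -> activation level
    for _ in range(steps):
        updated = dict(activation)
        for node, value in activation.items():
            if value < threshold:
                continue
            for neighbour in g.neighbors(node):
                weight = g[node][neighbour].get("weight", 1.0)
                updated[neighbour] = updated.get(neighbour, 0.0) + decay * value * weight
        activation = updated
    return activation

g = nx.Graph()
g.add_weighted_edges_from([
    ("climate change", "emissions", 0.9),
    ("emissions", "carbon tax", 0.7),
    ("climate change", "weather", 0.4),
])
# Terms with the highest resulting activation become candidates for ontology extension.
scores = spread_activation(g, {"climate change": 1.0})
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```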
  8. Hallonsten, O.; Holmberg, D.: Analyzing structural stratification in the Swedish higher education system : data contextualization with policy-history analysis (2013) 0.02
    0.016979462 = product of:
      0.067917846 = sum of:
        0.067917846 = sum of:
          0.036323145 = weight(_text_:research in 668) [ClassicSimilarity], result of:
            0.036323145 = score(doc=668,freq=6.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.2729826 = fieldWeight in 668, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
          0.0315947 = weight(_text_:22 in 668) [ClassicSimilarity], result of:
            0.0315947 = score(doc=668,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.19345059 = fieldWeight in 668, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=668)
      0.25 = coord(1/4)
    
    Abstract
    The 20th-century massification of higher education and research in academia is said to have produced structurally stratified higher education systems in many countries. Most manifestly, the research mission of universities appears to be divisive. Authors have claimed that the Swedish system, while formally unified, has developed into a binary state, and statistics seem to support this conclusion. This article makes use of a comprehensive statistical data source on Swedish higher education institutions to illustrate stratification, and uses literature on Swedish research policy history to contextualize the statistics. Highlighting the opportunities as well as the constraints of the data, the article argues that there is great merit in combining statistics with a qualitative analysis when studying the structural characteristics of national higher education systems. Not least, the article shows that it is an over-simplification to describe the Swedish system as binary; the stratification is more complex. On the basis of the analysis, the article also argues that while global trends certainly influence national developments, higher education systems have country-specific features that may enrich the understanding of how systems evolve and therefore should be analyzed as part of a broader study of the increasingly globalized academic system.
    Date
    22. 3.2013 19:43:01
  9. Heyer, G.; Läuter, M.; Quasthoff, U.; Wolff, C.: Texttechnologische Anwendungen am Beispiel Text Mining (2000) 0.02
    0.01695725 = product of:
      0.067829 = sum of:
        0.067829 = weight(_text_:media in 5565) [ClassicSimilarity], result of:
          0.067829 = score(doc=5565,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.31049973 = fieldWeight in 5565, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=5565)
      0.25 = coord(1/4)
    
    Source
    Sprachtechnologie für eine dynamische Wirtschaft im Medienzeitalter - Language technologies for dynamic business in the age of the media - L'ingénierie linguistique au service de la dynamisation économique à l'ère du multimédia: Tagungsakten der XXVI. Jahrestagung der Internationalen Vereinigung Sprache und Wirtschaft e.V., 23.-25.11.2000, Fachhochschule Köln. Hrsg.: K.-D. Schmitz
  10. Ohly, H.P.: Bibliometric mining : added value from document analysis and retrieval (2008) 0.02
    0.01695725 = product of:
      0.067829 = sum of:
        0.067829 = weight(_text_:media in 2386) [ClassicSimilarity], result of:
          0.067829 = score(doc=2386,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.31049973 = fieldWeight in 2386, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=2386)
      0.25 = coord(1/4)
    
    Source
    Kompatibilität, Medien und Ethik in der Wissensorganisation - Compatibility, Media and Ethics in Knowledge Organization: Proceedings der 10. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation Wien, 3.-5. Juli 2006 - Proceedings of the 10th Conference of the German Section of the International Society of Knowledge Organization Vienna, 3-5 July 2006. Ed.: H.P. Ohly, S. Netscher u. K. Mitgutsch
  11. Bella, A. La; Fronzetti Colladon, A.; Battistoni, E.; Castellan, S.; Francucci, M.: Assessing perceived organizational leadership styles through twitter text mining (2018) 0.02
    0.01695725 = product of:
      0.067829 = sum of:
        0.067829 = weight(_text_:media in 2400) [ClassicSimilarity], result of:
          0.067829 = score(doc=2400,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.31049973 = fieldWeight in 2400, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=2400)
      0.25 = coord(1/4)
    
    Abstract
    We propose a text classification tool based on support vector machines for the assessment of organizational leadership styles, as they appear to Twitter users. We collected Twitter data over 51 days, related to the first 30 Italian organizations in the 2015 ranking of Forbes Global 2000, out of which we selected the five with the most relevant volumes of tweets. We analyzed the communication of the company leaders, together with the dialogue among the stakeholders of each company, to understand the association with perceived leadership styles and dimensions. To assess leadership profiles, we referred to the 10-factor model developed by Barchiesi and La Bella in 2007. We maintain the distinctiveness of the approach we propose, as it allows a rapid assessment of the perceived leadership capabilities of an enterprise, as they emerge from its social media interactions. It can also be used to show how companies respond and manage their communication when specific events take place, and to assess their stakeholders' reactions.
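    A minimal sketch of the kind of support-vector-machine text classifier the abstract describes, using scikit-learn; the toy tweets and the two leadership labels are illustrative and do not reproduce the authors' 10-factor model or corpus.

```python
# Minimal sketch of an SVM text classifier over tweets using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "Our CEO listens to every employee and shares credit for wins",
    "Management announced the decision with no discussion allowed",
    "Great town hall today, leadership answered tough questions openly",
    "Another top-down directive, nobody was consulted",
]
labels = ["participative", "authoritative", "participative", "authoritative"]

# TF-IDF features over unigrams and bigrams, then a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(tweets, labels)
print(model.predict(["The board decided alone and informed staff afterwards"]))
```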
  12. Tonkin, E.L.; Tourte, G.J.L.: Working with text : tools, techniques and approaches for text mining (2016) 0.01
    0.014131041 = product of:
      0.056524165 = sum of:
        0.056524165 = weight(_text_:media in 4019) [ClassicSimilarity], result of:
          0.056524165 = score(doc=4019,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.25874978 = fieldWeight in 4019, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4019)
      0.25 = coord(1/4)
    
    Abstract
    What is text mining, and how can it be used? What relevance do these methods have to everyday work in information science and the digital humanities? How does one develop competences in text mining? Working with Text provides a series of cross-disciplinary perspectives on text mining and its applications. As text mining raises legal and ethical issues, the legal background of text mining and the responsibilities of the engineer are discussed in this book. Chapters provide an introduction to the use of the popular GATE text mining package with data drawn from social media, the use of text mining to support semantic search, the development of an authority system to support content tagging, and recent techniques in automatic language evaluation. Focused studies describe text mining on historical texts, automated indexing using constrained vocabularies, and the use of natural language processing to explore the climate science literature. Interviews are included that offer a glimpse into the real-life experience of working within commercial and academic text mining.
  13. Fonseca, F.; Marcinkowski, M.; Davis, C.: Cyber-human systems of thought and understanding (2019) 0.01
    0.01314147 = product of:
      0.05256588 = sum of:
        0.05256588 = sum of:
          0.020971177 = weight(_text_:research in 5011) [ClassicSimilarity], result of:
            0.020971177 = score(doc=5011,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.15760657 = fieldWeight in 5011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
          0.0315947 = weight(_text_:22 in 5011) [ClassicSimilarity], result of:
            0.0315947 = score(doc=5011,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.19345059 = fieldWeight in 5011, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5011)
      0.25 = coord(1/4)
    
    Abstract
    The present challenge faced by scientists working with Big Data comes in the overwhelming volume and level of detail provided by current data sets. Exceeding traditional empirical approaches, Big Data opens a new perspective on scientific work in which data comes to play a role in the development of the scientific problematic to be developed. Addressing this reconfiguration of our relationship with data through readings of Wittgenstein, Macherey, and Popper, we propose a picture of science that encourages scientists to engage with the data in a direct way, using the data itself as an instrument for scientific investigation. Using GIS as a theme, we develop the concept of cyber-human systems of thought and understanding to bridge the divide between representative (theoretical) thinking and (non-theoretical) data-driven science. At the foundation of these systems, we invoke the concept of the "semantic pixel" to establish a logical and virtual space linking data and the work of scientists. It is with this discussion of the relationship between analysts in their pursuit of knowledge and the rise of Big Data that this present discussion of the philosophical foundations of Big Data addresses the central questions raised by social informatics research.
    Date
    7. 3.2019 16:32:22
  14. Chowdhury, G.G.: Template mining for information extraction from digital documents (1999) 0.01
    0.011058145 = product of:
      0.04423258 = sum of:
        0.04423258 = product of:
          0.08846516 = sum of:
            0.08846516 = weight(_text_:22 in 4577) [ClassicSimilarity], result of:
              0.08846516 = score(doc=4577,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.5416616 = fieldWeight in 4577, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4577)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    2. 4.2000 18:01:22
  15. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.011006164 = product of:
      0.044024657 = sum of:
        0.044024657 = sum of:
          0.031386778 = weight(_text_:research in 1789) [ClassicSimilarity], result of:
            0.031386778 = score(doc=1789,freq=28.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.23588389 = fieldWeight in 1789, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.012637881 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.012637881 = score(doc=1789,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are terse, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas, or to the main reference texts in these fields."
  16. KDD : techniques and applications (1998) 0.01
    0.00947841 = product of:
      0.03791364 = sum of:
        0.03791364 = product of:
          0.07582728 = sum of:
            0.07582728 = weight(_text_:22 in 6783) [ClassicSimilarity], result of:
              0.07582728 = score(doc=6783,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.46428138 = fieldWeight in 6783, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6783)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    A special issue of selected papers from the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'97), held Singapore, 22-23 Feb 1997
  17. Tu, Y.-N.; Hsu, S.-L.: Constructing conceptual trajectory maps to trace the development of research fields (2016) 0.01
    0.0064210845 = product of:
      0.025684338 = sum of:
        0.025684338 = product of:
          0.051368676 = sum of:
            0.051368676 = weight(_text_:research in 3059) [ClassicSimilarity], result of:
              0.051368676 = score(doc=3059,freq=12.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.38605565 = fieldWeight in 3059, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3059)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This study proposes a new method to construct and trace the trajectory of conceptual development of a research field by combining main path analysis, citation analysis, and text-mining techniques. Main path analysis, a method used commonly to trace the most critical path in a citation network, helps describe the developmental trajectory of a research field. This study extends the main path analysis method and applies text-mining techniques in the new method, which reflects the trajectory of conceptual development in an academic research field more accurately than citation frequency, which represents only the articles examined. Articles can be merged based on similarity of concepts, and by merging concepts the history of a research field can be described more precisely. The new method was applied to the "h-index" and "text mining" fields. The precision, recall, and F-measures of the h-index were 0.738, 0.652, and 0.658 and those of text-mining were 0.501, 0.653, and 0.551, respectively. Last, this study not only establishes the conceptual trajectory map of a research field, but also recommends keywords that are more precise than those used currently by researchers. These precise keywords could enable researchers to gather related works more quickly than before.
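    Main path analysis, the backbone of the method described above, can be sketched with search path count (SPC) edge weights on a toy citation network; this is illustrative only and does not reproduce the authors' implementation, which additionally merges articles by concept similarity before tracing the trajectory.

```python
# Minimal sketch of main path analysis using search path count (SPC) edge weights.
import networkx as nx

def spc_weights(g):
    """SPC of edge (u, v) = (#source->u paths) * (#v->sink paths) in a citation DAG."""
    order = list(nx.topological_sort(g))
    from_sources = {n: 1 if g.in_degree(n) == 0 else 0 for n in order}
    for n in order:
        for p in g.predecessors(n):
            from_sources[n] += from_sources[p]
    to_sinks = {n: 1 if g.out_degree(n) == 0 else 0 for n in order}
    for n in reversed(order):
        for s in g.successors(n):
            to_sinks[n] += to_sinks[s]
    return {(u, v): from_sources[u] * to_sinks[v] for u, v in g.edges}

def main_path(g):
    """Greedily follow the highest-SPC edges from a source paper onwards."""
    weights = spc_weights(g)
    u, v = max((e for e in g.edges if g.in_degree(e[0]) == 0), key=weights.get)
    path = [u, v]
    while g.out_degree(path[-1]) > 0:
        path.append(max(g.successors(path[-1]), key=lambda s: weights[(path[-1], s)]))
    return path

g = nx.DiGraph([("p1", "p3"), ("p2", "p3"), ("p3", "p4"),
                ("p3", "p5"), ("p4", "p6"), ("p5", "p6")])
print(main_path(g))  # one conceptual trajectory through the toy citation network
```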
  18. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.01
    0.0063189403 = product of:
      0.025275761 = sum of:
        0.025275761 = product of:
          0.050551523 = sum of:
            0.050551523 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.050551523 = score(doc=1737,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22.11.1998 18:57:22
  19. Lusti, M.: Data Warehousing and Data Mining : Eine Einführung in entscheidungsunterstützende Systeme (1999) 0.01
    0.0063189403 = product of:
      0.025275761 = sum of:
        0.025275761 = product of:
          0.050551523 = sum of:
            0.050551523 = weight(_text_:22 in 4261) [ClassicSimilarity], result of:
              0.050551523 = score(doc=4261,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.30952093 = fieldWeight in 4261, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4261)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    17. 7.2002 19:22:06
  20. Amir, A.; Feldman, R.; Kashi, R.: ¬A new and versatile method for association generation (1997) 0.01
    0.0063189403 = product of:
      0.025275761 = sum of:
        0.025275761 = product of:
          0.050551523 = sum of:
            0.050551523 = weight(_text_:22 in 1270) [ClassicSimilarity], result of:
              0.050551523 = score(doc=1270,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.30952093 = fieldWeight in 1270, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1270)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information systems. 22(1997) nos.5/6, S.333-347


Languages

  • e (English): 51
  • d (German): 9
