Search (82 results, page 1 of 5)

  • Filter: theme_ss:"Data Mining"
  1. Chen, H.; Chau, M.: Web mining : machine learning for Web applications (2003) 0.11
    0.10549233 = product of:
      0.17582054 = sum of:
        0.060152818 = weight(_text_:wide in 4242) [ClassicSimilarity], result of:
          0.060152818 = score(doc=4242,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.29372054 = fieldWeight in 4242, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
        0.10319767 = weight(_text_:web in 4242) [ClassicSimilarity], result of:
          0.10319767 = score(doc=4242,freq=20.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.6841342 = fieldWeight in 4242, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4242)
        0.012470056 = product of:
          0.024940113 = sum of:
            0.024940113 = weight(_text_:research in 4242) [ClassicSimilarity], result of:
              0.024940113 = score(doc=4242,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.18912788 = fieldWeight in 4242, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4242)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
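    The indented tree above is Lucene's ClassicSimilarity explain output: per matching term, tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and the document score is coord × the sum of queryWeight × fieldWeight. A minimal Python sketch recomputing the score of result 1 from the values shown (the helper names are ours, not Lucene's):

      import math

      MAX_DOCS = 44218
      QUERY_NORM = 0.046221454

      def idf(doc_freq):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(freq, doc_freq, field_norm):
          tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
          query_weight = idf(doc_freq) * QUERY_NORM   # idf * queryNorm
          field_weight = tf * idf(doc_freq) * field_norm
          return query_weight * field_weight

      wide = term_score(2.0, 1430, 0.046875)            # -> 0.060152818
      web = term_score(20.0, 4597, 0.046875)            # -> 0.10319767
      research = term_score(2.0, 6931, 0.046875) * 0.5  # inner coord(1/2)

      score = (wide + web + research) * 0.6             # coord(3/5)
      print(score)                                      # ~0.10549233, as above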
    
    Abstract
    With more than two billion pages created by millions of Web page authors and organizations, the World Wide Web is a tremendously rich knowledge base. The knowledge comes not only from the content of the pages themselves, but also from the unique characteristics of the Web, such as its hyperlink structure and its diversity of content and languages. Analysis of these characteristics often reveals interesting patterns and new knowledge. Such knowledge can be used to improve users' efficiency and effectiveness in searching for information on the Web, and also for applications unrelated to the Web, such as support for decision making or business management. The Web's size and its unstructured and dynamic content, as well as its multilingual nature, make the extraction of useful knowledge a challenging research problem. Furthermore, the Web generates a large amount of data in other formats that contain valuable information. For example, Web server logs contain information about user access patterns that can be used for information personalization or for improving Web page design.
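    As a hedged illustration of that last point, per-page request counts fall straight out of a standard server log. The sketch below assumes a Common Log Format file named access.log; the file name and format are illustrative assumptions:

      import re
      from collections import Counter

      # Common Log Format: host ident user [time] "METHOD path HTTP/x" status bytes
      REQUEST = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+" \d{3}')

      counts = Counter()
      with open("access.log") as log:
          for line in log:
              match = REQUEST.search(line)
              if match:
                  counts[match.group(1)] += 1

      # The most-requested pages: one input for personalization or redesign decisions
      for path, n in counts.most_common(10):
          print(n, path)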
  2. Liu, B.: Web data mining : exploring hyperlinks, contents, and usage data (2011) 0.09
    0.08608098 = product of:
      0.1434683 = sum of:
        0.056712627 = weight(_text_:wide in 354) [ClassicSimilarity], result of:
          0.056712627 = score(doc=354,freq=4.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.2769224 = fieldWeight in 354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.0784423 = weight(_text_:web in 354) [ClassicSimilarity], result of:
          0.0784423 = score(doc=354,freq=26.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.520022 = fieldWeight in 354, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=354)
        0.008313371 = product of:
          0.016626742 = sum of:
            0.016626742 = weight(_text_:research in 354) [ClassicSimilarity], result of:
              0.016626742 = score(doc=354,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.12608525 = fieldWeight in 354, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=354)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Web mining aims to discover useful information and knowledge from the Web hyperlink structure, page contents, and usage data. Although Web mining uses many conventional data mining techniques, it is not purely an application of traditional data mining due to the semistructured and unstructured nature of the Web data and its heterogeneity. It has also developed many of its own algorithms and techniques. Liu has written a comprehensive text on Web data mining. Key topics of structure mining, content mining, and usage mining are covered both in breadth and in depth. His book brings together all the essential concepts and algorithms from related areas such as data mining, machine learning, and text processing to form an authoritative and coherent text. The book offers a rich blend of theory and practice, addressing seminal research ideas, as well as examining the technology from a practical point of view. It is suitable for students, researchers and practitioners interested in Web mining both as a learning text and a reference book. Lecturers can readily use it for classes on data mining, Web mining, and Web search. Additional teaching materials such as lecture slides, datasets, and implemented algorithms are available online.
    Content
    Contents: 1. Introduction 2. Association Rules and Sequential Patterns 3. Supervised Learning 4. Unsupervised Learning 5. Partially Supervised Learning 6. Information Retrieval and Web Search 7. Social Network Analysis 8. Web Crawling 9. Structured Data Extraction: Wrapper Generation 10. Information Integration
    RSWK
    World Wide Web / Data Mining
    Subject
    World Wide Web / Data Mining
  3. Baumgartner, R.: Methoden und Werkzeuge zur Webdatenextraktion (2006) 0.07
    0.06836396 = product of:
      0.1709099 = sum of:
        0.07017829 = weight(_text_:wide in 5808) [ClassicSimilarity], result of:
          0.07017829 = score(doc=5808,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.342674 = fieldWeight in 5808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5808)
        0.100731604 = weight(_text_:web in 5808) [ClassicSimilarity], result of:
          0.100731604 = score(doc=5808,freq=14.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.6677857 = fieldWeight in 5808, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5808)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web can be regarded as the largest "database" known to us. Unfortunately, today's Web is largely designed for presentation to human users and consists of very heterogeneous data collections. Moreover, the Web lacks facilities for querying information in a structured way and aggregated across different sources. Today's Web is therefore unsuited to automatic machine processing. To exploit Web data effectively nonetheless, languages, methods and tools for extracting and aggregating these data have been developed. This article gives an overview and a categorization of various approaches to data extraction from the Web. Several example scenarios in B2B data exchange, in the business intelligence area and, in particular, the generation of data for Semantic Web ontologies illustrate the effective use of these technologies.
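    To give a rough flavour of such wrapper-based extraction, the sketch below uses only Python's standard html.parser to pull table rows out of a page; the example markup is invented:

      from html.parser import HTMLParser

      class TableWrapper(HTMLParser):
          # Tiny wrapper: collects the text of every <td> cell, row by row.
          def __init__(self):
              super().__init__()
              self.rows, self._row, self._in_cell = [], [], False

          def handle_starttag(self, tag, attrs):
              if tag == "tr":
                  self._row = []
              elif tag == "td":
                  self._in_cell = True

          def handle_endtag(self, tag):
              if tag == "td":
                  self._in_cell = False
              elif tag == "tr" and self._row:
                  self.rows.append(self._row)

          def handle_data(self, data):
              if self._in_cell and data.strip():
                  self._row.append(data.strip())

      wrapper = TableWrapper()
      wrapper.feed("<table><tr><td>Widget</td><td>9.90 EUR</td></tr></table>")
      print(wrapper.rows)   # [['Widget', '9.90 EUR']]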
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  4. Classification, automation, and new media : Proceedings of the 24th Annual Conference of the Gesellschaft für Klassifikation e.V., University of Passau, March 15 - 17, 2000 (2002) 0.05
    0.047197547 = product of:
      0.11799386 = sum of:
        0.070890784 = weight(_text_:wide in 5997) [ClassicSimilarity], result of:
          0.070890784 = score(doc=5997,freq=4.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.34615302 = fieldWeight in 5997, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
        0.047103077 = weight(_text_:web in 5997) [ClassicSimilarity], result of:
          0.047103077 = score(doc=5997,freq=6.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.3122631 = fieldWeight in 5997, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5997)
      0.4 = coord(2/5)
    
    Content
    Data Analysis, Statistics, and Classification.- Pattern Recognition and Automation.- Data Mining, Information Processing, and Automation.- New Media, Web Mining, and Automation.- Applications in Management Science, Finance, and Marketing.- Applications in Medicine, Biology, Archaeology, and Others.- Author Index.- Subject Index.
    RSWK
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
    Subject
    World Wide Web / Wissensorganisation / Kongress / Passau <2000>
  5. Liu, Y.; Zhang, M.; Cen, R.; Ru, L.; Ma, S.: Data cleansing for Web information retrieval using query independent features (2007) 0.05
    0.04628696 = product of:
      0.115717396 = sum of:
        0.105325684 = weight(_text_:web in 607) [ClassicSimilarity], result of:
          0.105325684 = score(doc=607,freq=30.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.69824153 = fieldWeight in 607, product of:
              5.477226 = tf(freq=30.0), with freq of:
                30.0 = termFreq=30.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=607)
        0.010391714 = product of:
          0.020783428 = sum of:
            0.020783428 = weight(_text_:research in 607) [ClassicSimilarity], result of:
              0.020783428 = score(doc=607,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.15760657 = fieldWeight in 607, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=607)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Understanding what kinds of Web pages are the most useful for Web search engine users is a critical task in Web information retrieval (IR). Most previous works used hyperlink analysis algorithms to solve this problem. However, little research has been focused on query-independent Web data cleansing for Web IR. In this paper, we first provide an analysis of the differences between retrieval target pages and ordinary ones based on more than 30 million Web pages obtained from both the Text Retrieval Conference (TREC) and a widely used Chinese search engine, SOGOU (www.sogou.com). We further propose a learning-based data cleansing algorithm for reducing Web pages that are unlikely to be useful for user requests. We found that there exists a large proportion of low-quality Web pages in both the English and the Chinese Web page corpus, and retrieval target pages can be identified using query-independent features and cleansing algorithms. The experimental results showed that our algorithm is effective in reducing a large portion of Web pages with a small loss in retrieval target pages. This makes it possible for Web IR tools to meet a large fraction of users' needs with only a small part of the pages on the Web. These results may help Web search engines make better use of their limited storage and computation resources to improve search performance.
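    A minimal sketch of the idea of learning-based cleansing over query-independent features; the three features and the toy labels here are invented placeholders, not the authors' actual feature set:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Query-independent features per page, e.g. [doc length, in-link count, URL depth]
      X_train = np.array([[1200, 35, 1], [80, 0, 6], [5400, 120, 2], [40, 1, 7]])
      y_train = np.array([1, 0, 1, 0])   # 1 = likely retrieval target, 0 = low quality

      cleanser = LogisticRegression(max_iter=1000).fit(X_train, y_train)

      X_new = np.array([[2000, 60, 2], [55, 0, 8]])
      keep = cleanser.predict(X_new)     # pages predicted 0 are dropped before indexing
      print(keep)                        # e.g. [1 0]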
    Footnote
    Contribution to a thematic issue "Mining Web resources for enhancing information retrieval"
  6. Chakrabarti, S.: Mining the Web : discovering knowledge from hypertext data (2003) 0.04
    0.04490332 = product of:
      0.1122583 = sum of:
        0.040101882 = weight(_text_:wide in 2222) [ClassicSimilarity], result of:
          0.040101882 = score(doc=2222,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.1958137 = fieldWeight in 2222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
        0.07215642 = weight(_text_:web in 2222) [ClassicSimilarity], result of:
          0.07215642 = score(doc=2222,freq=22.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.47835067 = fieldWeight in 2222, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2222)
      0.4 = coord(2/5)
    
    Footnote
    Rez. in: JASIST 55(2004) no.3, S.275-276 (C. Chen): "This is a book about finding significant statistical patterns on the Web - in particular, patterns that are associated with hypertext documents, topics, hyperlinks, and queries. The term pattern in this book refers to dependencies among such items. On the one hand, the Web contains useful information on just about every topic under the sun. On the other hand, just like searching for a needle in a haystack, one would need powerful tools to locate useful information on the vast land of the Web. Soumen Chakrabarti's book focuses on a wide range of techniques for machine learning and data mining on the Web. The goal of the book is to provide both the technical background and the tools and tricks of the trade of Web content mining. Much of the technical content reflects the state of the art between 1995 and 2002. The targeted audience is researchers and innovative developers in this area, as well as newcomers intending to enter it. The book begins with an introduction chapter, which explains fundamental concepts such as crawling and indexing as well as clustering and classification. The remaining eight chapters are organized into three parts: i) infrastructure, ii) learning and iii) applications.
    Part I, Infrastructure, has two chapters: Chapter 2 on crawling the Web and Chapter 3 on Web search and information retrieval. The second part of the book, containing chapters 4, 5, and 6, is the centerpiece. This part specifically focuses on machine learning in the context of hypertext. Part III is a collection of applications that utilize the techniques described in earlier chapters. Chapter 7 is on social network analysis. Chapter 8 is on resource discovery. Chapter 9 is on the future of Web mining. Overall, this is a valuable reference book for researchers and developers in the field of Web mining. It should be particularly useful for those who would like to design and probably code their own computer programs out of the equations and pseudocode on most of the pages. For a student, the most valuable feature of the book is perhaps the formal and consistent treatment of concepts across the board. For what is behind and beyond the technical details, one has to either dig deeper into the bibliographic notes at the end of each chapter, or resort to more in-depth analysis of relevant subjects in the literature. If you are looking for success stories about Web mining or hard-won lessons from failures, this is not the book."
  7. Mining text data (2012) 0.04
    0.04416885 = product of:
      0.07361475 = sum of:
        0.040101882 = weight(_text_:wide in 362) [ClassicSimilarity], result of:
          0.040101882 = score(doc=362,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.1958137 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.02175598 = weight(_text_:web in 362) [ClassicSimilarity], result of:
          0.02175598 = score(doc=362,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.14422815 = fieldWeight in 362, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=362)
        0.011756882 = product of:
          0.023513764 = sum of:
            0.023513764 = weight(_text_:research in 362) [ClassicSimilarity], result of:
              0.023513764 = score(doc=362,freq=4.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.17831147 = fieldWeight in 362, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=362)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    Text mining applications have experienced tremendous advances because of Web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book covers a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data, which makes the mining process much more challenging. A number of methods, such as transfer learning and cross-lingual mining, have been designed for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and management science audiences focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
  8. Schwartz, D.: Graphische Datenanalyse für digitale Bibliotheken : Leistungs- und Funktionsumfang moderner Analyse- und Visualisierungsinstrumente (2006) 0.04
    0.043300506 = product of:
      0.10825126 = sum of:
        0.07017829 = weight(_text_:wide in 30) [ClassicSimilarity], result of:
          0.07017829 = score(doc=30,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.342674 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0546875 = fieldNorm(doc=30)
        0.038072966 = weight(_text_:web in 30) [ClassicSimilarity], result of:
          0.038072966 = score(doc=30,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.25239927 = fieldWeight in 30, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=30)
      0.4 = coord(2/5)
    
    Abstract
    The World Wide Web makes vast amounts of data available. For users it is becoming ever more difficult to sift through these amounts of data, to assess them, and to filter out the relevant data. Visualization tools offer one approach to this problem: with their help, search results are no longer presented exclusively as text-based document lists but via symbols, icons or graphical elements. Suitable visualization techniques can reveal information structures in large data sets. Information visualization is thus an instrument for structuring search results in a digital library and for making relevant data easier for users to find.
  9. Kulathuramaiyer, N.; Maurer, H.: Implications of emerging data mining (2009) 0.04
    0.04252169 = product of:
      0.10630422 = sum of:
        0.060152818 = weight(_text_:wide in 3144) [ClassicSimilarity], result of:
          0.060152818 = score(doc=3144,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.29372054 = fieldWeight in 3144, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=3144)
        0.046151403 = weight(_text_:web in 3144) [ClassicSimilarity], result of:
          0.046151403 = score(doc=3144,freq=4.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.3059541 = fieldWeight in 3144, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3144)
      0.4 = coord(2/5)
    
    Abstract
    Data mining describes a technology that discovers non-trivial hidden patterns in a large collection of data. Although this technology has a tremendous impact on our lives, the invaluable contributions of this invisible technology often go unnoticed. This paper discusses advances in data mining while focusing on the emerging data mining capability. Such data mining applications perform multidimensional mining on a wide variety of heterogeneous data sources, providing solutions to many unresolved problems. This paper also highlights the advantages and disadvantages arising from the ever-expanding scope of data mining. Data mining augments human intelligence by equipping us with a wealth of knowledge and by empowering us to perform our daily tasks better. As the mining scope and capacity increase, users and organizations become more willing to compromise privacy. The huge data stores of the 'master miners' allow them to gain deep insights into individual lifestyles and their social and behavioural patterns. The capability to integrate data and analyse business and financial trends together, combined with the ability to deterministically track market changes, will drastically affect our lives.
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
  10. Lihui, C.; Lian, C.W.: Using Web structure and summarisation techniques for Web content mining (2005) 0.03
    0.034924287 = product of:
      0.08731072 = sum of:
        0.076919004 = weight(_text_:web in 1046) [ClassicSimilarity], result of:
          0.076919004 = score(doc=1046,freq=16.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.5099235 = fieldWeight in 1046, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1046)
        0.010391714 = product of:
          0.020783428 = sum of:
            0.020783428 = weight(_text_:research in 1046) [ClassicSimilarity], result of:
              0.020783428 = score(doc=1046,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.15760657 = fieldWeight in 1046, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1046)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The dynamic nature and size of the Internet can result in difficulty finding relevant information. Most users typically express their information need via short queries to search engines and they often have to physically sift through the search results based on relevance ranking set by the search engines, making the process of relevance judgement time-consuming. In this paper, we describe a novel representation technique which makes use of the Web structure together with summarisation techniques to better represent knowledge in actual Web documents. We name the proposed technique Semantic Virtual Document (SVD). We will discuss how the proposed SVD can be used together with a suitable clustering algorithm to achieve an automatic content-based categorization of similar Web documents. The auto-categorization facility as well as a "tree-like" graphical user interface (GUI) for post-retrieval document browsing enhances the relevance judgement process for Internet users. Furthermore, we will introduce how our cluster-biased automatic query expansion technique can be used to overcome the ambiguity of short queries typically given by users. We will outline our experimental design to evaluate the effectiveness of the proposed SVD for representation and present a prototype called iSEARCH (Intelligent SEarch And Review of Cluster Hierarchy) for Web content mining. Our results confirm, quantify and extend previous research using Web structure and summarisation techniques, introducing novel techniques for knowledge representation to enhance Web content mining.
  11. Medien-Informationsmanagement : Archivarische, dokumentarische, betriebswirtschaftliche, rechtliche und Berufsbild-Aspekte ; [Frühjahrstagung der Fachgruppe 7 im Jahr 2000 in Weimar und Folgetagung 2001 in Köln] (2003) 0.03
    0.03347217 = product of:
      0.05578695 = sum of:
        0.030076409 = weight(_text_:wide in 1833) [ClassicSimilarity], result of:
          0.030076409 = score(doc=1833,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.14686027 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.016316984 = weight(_text_:web in 1833) [ClassicSimilarity], result of:
          0.016316984 = score(doc=1833,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.108171105 = fieldWeight in 1833, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1833)
        0.009393553 = product of:
          0.018787106 = sum of:
            0.018787106 = weight(_text_:22 in 1833) [ClassicSimilarity], result of:
              0.018787106 = score(doc=1833,freq=2.0), product of:
                0.16185966 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046221454 = queryNorm
                0.116070345 = fieldWeight in 1833, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1833)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Abstract
    When, in the 1970s, the title "information manager" was promoted with increasing frequency for people who had until then gone by "documentalist", the established circles of archivists and librarians occasionally smiled at this and read it as the sign of an identity crisis, or at least of an insecurity in the professional profile so labelled. For the profession of media archivists/media documentalists, organized since 1960 in Fachgruppe 7 of the Verein, later Verband deutscher Archivare (VdA), this positioning amid new substantive challenges (the information flood) and new technologies (EDP) had, however, long been a matter of course in everyday professional life. "Stop, it will not work without us!" ran the headline of an article in the association journal "Info 7" dealing with the construction of ever more powerful networks and ever faster data highways. Information, information society: at that time these terms were understood almost exclusively in a technical sense. The informatized, not the informed, society stood in the foreground - which in turn brought critics onto the scene, from Joseph Weizenbaum in the USA to the information ecologists in Bremen. In the national, sometimes only regional, projects and pilot trials with data highways - including the early Btx - it had never become quite clear which contents, in what form, were to be sent down these networks and roads, and who was actually supposed to select, portion, position, in short: manage, these contents. With the World Wide Web at the latest, these projects became obsolete, at least as far as hardware and software were concerned. What remained is the topic of contents (in today's parlance: content). And, ever more pressing in a sense that is not merely technical, the topic of information management. "Medien-Informationsmanagement" was the title of the Fachgruppe 7 spring conference of 2000 in Weimar, and the follow-up conference of 2001 in Cologne, which set a documentary pragmatism against multimedia production, likewise dealt with the business field of content and with content management systems. The lectures and discussion contributions from these two conferences, collected in this sixth volume of the series Beiträge zur Mediendokumentation, illuminate the title topic from the most varied angles: archival, documentary, commercial, professional and legal. It becomes clear that the job title media archivist/media documentalist stands fairly precisely for everything that happens today with so-called old and new media in an organizational, that is, ordering and mediating, sense. This is true in particular of the Internet and the intranets born from it. Both need the ordering hand schooled on the old media - on books, newspapers, sound carriers, film and so on - for they live in large part from them. That the Internet is nonetheless a medium sui generis and confronts the old information professions with entirely new challenges - that, too, runs through the contributions from Weimar and Cologne.
    Date
    11. 5.2008 19:49:22
  12. Huvila, I.: Mining qualitative data on human information behaviour from the Web (2010) 0.03
    0.032197084 = product of:
      0.08049271 = sum of:
        0.065944314 = weight(_text_:web in 4676) [ClassicSimilarity], result of:
          0.065944314 = score(doc=4676,freq=6.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.43716836 = fieldWeight in 4676, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4676)
        0.014548399 = product of:
          0.029096797 = sum of:
            0.029096797 = weight(_text_:research in 4676) [ClassicSimilarity], result of:
              0.029096797 = score(doc=4676,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.22064918 = fieldWeight in 4676, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4676)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This paper discusses an approach to collecting qualitative data on human information behaviour that is based on mining web data using search engines. The approach is technically the same as that which has been used for some time in webometric research to make statistical inferences on web data, but the present paper shows how the same tools and data collecting methods can be used to gather data for qualitative data analysis on human information behaviour.
  13. Li, J.; Zhang, P.; Cao, J.: External concept support for group support systems through Web mining (2009) 0.03
    0.031095197 = product of:
      0.077737994 = sum of:
        0.065267935 = weight(_text_:web in 2806) [ClassicSimilarity], result of:
          0.065267935 = score(doc=2806,freq=8.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.43268442 = fieldWeight in 2806, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2806)
        0.012470056 = product of:
          0.024940113 = sum of:
            0.024940113 = weight(_text_:research in 2806) [ClassicSimilarity], result of:
              0.024940113 = score(doc=2806,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.18912788 = fieldWeight in 2806, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2806)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    External information plays an important role in group decision-making processes, yet research about external information support for Group Support Systems (GSS) has been lacking. In this study, we propose an approach to build a concept space to provide external concept support for GSS users. Built on a Web mining algorithm, the approach can mine a concept space from the Web and retrieve related concepts from it in real time, based on users' comments. We conduct two experiments to evaluate the quality of the proposed approach and the effectiveness of the external concept support it provides. The experimental results indicate that the concept space mined from the Web contained qualified concepts to stimulate divergent thinking. The results also demonstrate that external concept support in GSS greatly enhanced group productivity for idea generation tasks.
  14. Wei, C.-P.; Lee, Y.-H.; Chiang, Y.-S.; Chen, C.-T.; Yang, C.C.C.: Exploiting temporal characteristics of features for effectively discovering event episodes from news corpora (2014) 0.03
    0.03092893 = product of:
      0.07732233 = sum of:
        0.05012735 = weight(_text_:wide in 1225) [ClassicSimilarity], result of:
          0.05012735 = score(doc=1225,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.24476713 = fieldWeight in 1225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1225)
        0.027194975 = weight(_text_:web in 1225) [ClassicSimilarity], result of:
          0.027194975 = score(doc=1225,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.18028519 = fieldWeight in 1225, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1225)
      0.4 = coord(2/5)
    
    Abstract
    An organization performing environmental scanning generally monitors or tracks various events concerning its external environment. One of the major resources for environmental scanning is online news documents, which are readily accessible on news websites or infomediaries. However, the proliferation of the World Wide Web, which increases information sources and improves information circulation, has vastly expanded the amount of information to be scanned. Thus, it is essential to develop an effective event episode discovery mechanism to organize news documents pertaining to an event of interest. In this study, we propose two new metrics, Term Frequency × Inverse Document Frequency-Tempo (TF×IDF-Tempo) and TF×Enhanced-IDF-Tempo, and develop a temporal-based event episode discovery (TEED) technique that uses the proposed metrics for feature selection and document representation. Using a traditional TF×IDF-based hierarchical agglomerative clustering technique as a performance benchmark, our empirical evaluation reveals that the proposed TEED technique outperforms its benchmark, as measured by cluster recall and cluster precision. In addition, the use of TF×Enhanced-IDF-Tempo significantly improves the effectiveness of event episode discovery when compared with the use of TF×IDF-Tempo.
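    The benchmark named above - TF×IDF document vectors grouped by hierarchical agglomerative clustering - can be sketched with scikit-learn; the four toy headlines and the cluster count are our assumptions:

      from sklearn.cluster import AgglomerativeClustering
      from sklearn.feature_extraction.text import TfidfVectorizer

      news = [
          "earthquake strikes coastal city",
          "aftershocks follow coastal earthquake",
          "central bank raises interest rates",
          "markets react to interest rate decision",
      ]

      # TF-IDF vectors, then bottom-up (agglomerative) clustering into episodes
      vectors = TfidfVectorizer().fit_transform(news).toarray()
      labels = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)
      print(labels)   # stories about the same event should share a label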
  15. Nicholson, S.: Bibliomining for automated collection development in a digital library setting : using data mining to discover Web-based scholarly research works (2003) 0.03
    0.030069351 = product of:
      0.07517338 = sum of:
        0.05438995 = weight(_text_:web in 1867) [ClassicSimilarity], result of:
          0.05438995 = score(doc=1867,freq=8.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.36057037 = fieldWeight in 1867, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1867)
        0.020783428 = product of:
          0.041566856 = sum of:
            0.041566856 = weight(_text_:research in 1867) [ClassicSimilarity], result of:
              0.041566856 = score(doc=1867,freq=8.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.31521314 = fieldWeight in 1867, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1867)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This research creates an intelligent agent for automated collection development in a digital library setting. It uses a predictive model based on facets of each Web page to select scholarly works. The criteria came from the academic library selection literature, and a Delphi study was used to refine the list to 41 criteria. A Perl program was designed to analyze a Web page for each criterion and applied to a large collection of scholarly and nonscholarly Web pages. Bibliomining, or data mining for libraries, was then used to create different classification models. Four techniques were used: logistic regression, nonparametric discriminant analysis, classification trees, and neural networks. Accuracy and return were used to judge the effectiveness of each model on test datasets. In addition, a set of problematic pages that were difficult to classify because of their similarity to scholarly research was gathered and classified using the models. The resulting models could be used in the selection process to automatically create a digital library of Web-based scholarly research works. In addition, the technique can be extended to create a digital library of any type of structured electronic information.
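    The four model families listed map onto standard scikit-learn estimators; a hedged sketch with synthetic stand-ins for the 41 page criteria (QuadraticDiscriminantAnalysis stands in for the paper's nonparametric discriminant analysis):

      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.random((200, 41))                 # 41 page-level criteria, synthetic
      y = (X[:, 0] + X[:, 1] > 1).astype(int)   # 1 = scholarly, 0 = not

      models = {
          "logistic regression": LogisticRegression(max_iter=1000),
          "discriminant analysis": QuadraticDiscriminantAnalysis(),
          "classification tree": DecisionTreeClassifier(),
          "neural network": MLPClassifier(max_iter=2000),
      }
      for name, model in models.items():
          acc = cross_val_score(model, X, y, cv=5).mean()   # mean accuracy
          print(f"{name}: {acc:.2f}")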
  16. Klein, H.: Web Content Mining (2004) 0.03
    0.02793943 = product of:
      0.069848575 = sum of:
        0.061535202 = weight(_text_:web in 3154) [ClassicSimilarity], result of:
          0.061535202 = score(doc=3154,freq=16.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.4079388 = fieldWeight in 3154, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3154)
        0.008313371 = product of:
          0.016626742 = sum of:
            0.016626742 = weight(_text_:research in 3154) [ClassicSimilarity], result of:
              0.016626742 = score(doc=3154,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.12608525 = fieldWeight in 3154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3154)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Web mining - a buzzword read and heard ever more often as the Internet spreads. Current research, however, is concerned mainly with the usage behaviour of Internet users, and a look at the programmes of relevant conferences (e.g. GOR - German Online Research) shows that the analysis of content is hardly a topic at all. At GOR 1999 two talks were given on the subject; at the follow-up conference in 2001, not a single one. Web mining is the umbrella term for two types: Web usage mining and Web content mining. Web usage mining means analysing the data that accrue when the WWW is used and that are logged by the servers. One can determine which pages were requested how often, how long users stayed on each page, and much more. In Web content mining, the content of the Web pages is examined, which may comprise not only text but also images, video and audio. The software for analysing Web pages exists in its essentials, but most Web pages must first be prepared for the respective analysis software. First, the relevant websites containing the sought content have to be identified. This is usually done with search engines, of which there are now hundreds. One cannot assume, however, that the search engines cover all existing Web pages. That is impossible, for the rapid growth of the Internet adds thousands of Web pages daily, while existing ones change or are deleted. Often it is also unknown how the search engines work, as this is among the operators' trade secrets. One must therefore assume that the search engines cannot find all relevant websites. The next step is downloading the websites; software for this can be found under the labels offline reader or web spider. The aim of these programs is to download a website in a form that allows it to be viewed offline, with the structure of the website as a rule preserved. Whoever wants to analyse the contents of a website must therefore be able to process all its files with the chosen analysis software. Software for content analysis assumes that only textual information in a single file is processed; QDA software (qualitative data analysis), by contrast, also handles audio and video content as well as Internet-specific communication such as chats.
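    The preparation step described above - reducing a downloaded page to a single text stream for the analysis software - can be sketched with the Python standard library alone; the URL is a placeholder:

      import re
      from collections import Counter
      from urllib.request import urlopen

      # Placeholder URL; a real study would start from search-engine results
      html = urlopen("https://example.org/").read().decode("utf-8", "replace")

      # Crude preparation: drop scripts/styles and tags, keep the visible text
      html = re.sub(r"(?is)<(script|style).*?</\1>", " ", html)
      text = re.sub(r"(?s)<[^>]+>", " ", html)

      # A first content-analysis pass: the most frequent terms on the page
      words = re.findall(r"[a-zäöüß]{3,}", text.lower())
      print(Counter(words).most_common(15))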
  17. Matson, L.D.; Bonski, D.J.: Do digital libraries need librarians? (1997) 0.03
    0.027424574 = product of:
      0.068561435 = sum of:
        0.04351196 = weight(_text_:web in 1737) [ClassicSimilarity], result of:
          0.04351196 = score(doc=1737,freq=2.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.2884563 = fieldWeight in 1737, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1737)
        0.025049476 = product of:
          0.050098952 = sum of:
            0.050098952 = weight(_text_:22 in 1737) [ClassicSimilarity], result of:
              0.050098952 = score(doc=1737,freq=2.0), product of:
                0.16185966 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046221454 = queryNorm
                0.30952093 = fieldWeight in 1737, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1737)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Defines digital libraries and discusses the effects of new technology on librarians. Examines the different viewpoints of librarians and information technologists on digital libraries. Describes the development of a digital library at the National Drug Intelligence Center, USA, which was carried out in collaboration with information technology experts. The system is based on Web enabled search technology to find information, data visualization and data mining to visualize it and use of SGML as an information standard to store it
    Date
    22.11.1998 18:57:22
  18. Information visualization in data mining and knowledge discovery (2002) 0.03
    0.025472585 = product of:
      0.06368146 = sum of:
        0.020050941 = weight(_text_:wide in 1789) [ClassicSimilarity], result of:
          0.020050941 = score(doc=1789,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.09790685 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.04363052 = sum of:
          0.031105785 = weight(_text_:research in 1789) [ClassicSimilarity], result of:
            0.031105785 = score(doc=1789,freq=28.0), product of:
              0.13186905 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046221454 = queryNorm
              0.23588389 = fieldWeight in 1789, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.012524738 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.012524738 = score(doc=1789,freq=2.0), product of:
              0.16185966 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046221454 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.4 = coord(2/5)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process are illustrated in chapters focused an using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provide a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus an bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our under- standing of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw an a wide range of research experience" (p. 300). 
    Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement the editors' goal of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
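    [Ed.: To make the mine-then-visualize loop the review describes concrete, here is a minimal sketch, not taken from the book: a data mining step (k-means clustering) discovers structure, and a visualization step (a 2-D scatter plot) displays it for human inspection. The synthetic data, parameter choices, and libraries (scikit-learn, matplotlib) are illustrative assumptions.]

        # Minimal sketch of integrating a mining step with a visualization step.
        # Data and parameters are illustrative, not from the reviewed book.
        import matplotlib.pyplot as plt
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs

        # Synthetic 2-D data standing in for a large database.
        X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

        # Data mining step: discover cluster structure.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

        # Visualization step: let the user visually evaluate the mined structure.
        plt.scatter(X[:, 0], X[:, 1], c=labels, s=15)
        plt.title("Clusters discovered by k-means")
        plt.show()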
  19. Vaughan, L.; Chen, Y.: Data mining from web search queries : a comparison of Google trends and Baidu index (2015) 0.03
    0.025103599 = product of:
      0.062759 = sum of:
        0.047103077 = weight(_text_:web in 1605) [ClassicSimilarity], result of:
          0.047103077 = score(doc=1605,freq=6.0), product of:
            0.1508442 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046221454 = queryNorm
            0.3122631 = fieldWeight in 1605, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1605)
        0.015655924 = product of:
          0.031311847 = sum of:
            0.031311847 = weight(_text_:22 in 1605) [ClassicSimilarity], result of:
              0.031311847 = score(doc=1605,freq=2.0), product of:
                0.16185966 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046221454 = queryNorm
                0.19345059 = fieldWeight in 1605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1605)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Numerous studies have explored the possibility of uncovering information from web search queries but few have examined the factors that affect web query data sources. We conducted a study that investigated this issue by comparing Google Trends and Baidu Index. Data from these two services are based on queries entered by users into Google and Baidu, two of the largest search engines in the world. We first compared the features and functions of the two services based on documents and extensive testing. We then carried out an empirical study that collected query volume data from the two sources. We found that data from both sources could be used to predict the quality of Chinese universities and companies. Despite the differences between the two services in terms of technology, such as differing methods of language processing, the search volume data from the two were highly correlated and combining the two data sources did not improve the predictive power of the data. However, there was a major difference between the two in terms of data availability. Baidu Index was able to provide more search volume data than Google Trends did. Our analysis showed that the disadvantage of Google Trends in this regard was due to Google's smaller user base in China. The implication of this finding goes beyond China. Google's user bases in many countries are smaller than that in China, so the search volume data related to those countries could result in the same issue as that related to China.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.1, S.13-22
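    [Ed.: A minimal sketch of the core comparison described in the abstract above: correlating search-volume series collected from two services. The numbers are invented and the use of scipy is an assumption; the paper's actual data and procedure are not reproduced here.]

        # Minimal sketch: correlate two hypothetical weekly search-volume series.
        from scipy.stats import pearsonr

        google_trends = [55, 60, 58, 72, 80, 77, 65, 70]  # hypothetical index values
        baidu_index   = [50, 57, 55, 70, 83, 75, 60, 68]  # hypothetical index values

        # A high Pearson r would indicate the two sources track each other closely.
        r, p = pearsonr(google_trends, baidu_index)
        print(f"Pearson r = {r:.2f} (p = {p:.3f})")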
  20. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.02
    0.024207626 = product of:
      0.060519062 = sum of:
        0.05012735 = weight(_text_:wide in 3888) [ClassicSimilarity], result of:
          0.05012735 = score(doc=3888,freq=2.0), product of:
            0.20479609 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046221454 = queryNorm
            0.24476713 = fieldWeight in 3888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3888)
        0.010391714 = product of:
          0.020783428 = sum of:
            0.020783428 = weight(_text_:research in 3888) [ClassicSimilarity], result of:
              0.020783428 = score(doc=3888,freq=2.0), product of:
                0.13186905 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046221454 = queryNorm
                0.15760657 = fieldWeight in 3888, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3888)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
    Source
    Journal of machine learning research. 9(2008), S.2579-2605
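    [Ed.: For readers who want to try the technique described above, a minimal sketch assuming scikit-learn's TSNE implementation, which follows van den Maaten and Hinton's method; the dataset and parameter choices are illustrative.]

        # Minimal sketch: embed 64-dimensional digit images into a 2-D t-SNE map.
        import matplotlib.pyplot as plt
        from sklearn.datasets import load_digits
        from sklearn.manifold import TSNE

        X, y = load_digits(return_X_y=True)  # 1,797 samples, 64 dimensions

        # perplexity balances local vs. global structure; 30 is a common default.
        emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

        # Color each point by its digit class to reveal cluster structure.
        plt.scatter(emb[:, 0], emb[:, 1], c=y, s=10, cmap="tab10")
        plt.title("t-SNE map of the digits data")
        plt.show()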
