Search (76 results, page 1 of 4)

  • theme_ss:"Visualisierung"
  1. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.08
    0.07523431 = product of:
      0.15046862 = sum of:
        0.09790273 = weight(_text_:media in 1397) [ClassicSimilarity], result of:
          0.09790273 = score(doc=1397,freq=6.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.44816777 = fieldWeight in 1397, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.05256588 = sum of:
          0.020971177 = weight(_text_:research in 1397) [ClassicSimilarity], result of:
            0.020971177 = score(doc=1397,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.15760657 = fieldWeight in 1397, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1397)
          0.0315947 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
            0.0315947 = score(doc=1397,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.19345059 = fieldWeight in 1397, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1397)
      0.5 = coord(2/4)
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Date
    22. 3.2008 14:29:25
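    Note on the score breakdowns: the indented block at the top of each hit is Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a reading aid only, the sketch below reproduces the 0.0752 score of this first hit from the figures shown above (term frequencies, document frequencies, queryNorm, fieldNorm, coord); the helper function names are illustrative, not part of the catalogue or of Lucene's API.
      import math

      def idf(doc_freq, max_docs):
          # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          i = idf(doc_freq, max_docs)
          query_weight = i * query_norm                      # idf * queryNorm
          field_weight = math.sqrt(freq) * i * field_norm    # tf * idf * fieldNorm
          return query_weight * field_weight

      MAX_DOCS, QUERY_NORM, FIELD_NORM = 44218, 0.046639, 0.0390625

      media    = term_score(6, 1110, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.0979
      research = term_score(2, 6931, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.0210
      d22      = term_score(2, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.0316

      coord = 2 / 4   # coord(2/4) from the breakdown: 2 of 4 query clauses matched
      print((media + (research + d22)) * coord)              # ~0.0752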
  2. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.02
    0.023627056 = product of:
      0.09450822 = sum of:
        0.09450822 = sum of:
          0.06291352 = weight(_text_:research in 5272) [ClassicSimilarity], result of:
            0.06291352 = score(doc=5272,freq=18.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.47281966 = fieldWeight in 5272, product of:
                4.2426405 = tf(freq=18.0), with freq of:
                  18.0 = termFreq=18.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
          0.0315947 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
            0.0315947 = score(doc=5272,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.19345059 = fieldWeight in 5272, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
      0.25 = coord(1/4)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
    Date
    22. 7.2006 16:11:05
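    The pivotal points mentioned in the abstract above are found with Freeman's betweenness centrality. As a hedged illustration only (not CiteSpace's own code), the sketch below computes it with networkx on an invented toy co-citation network; the hypothetical node "P" bridges two clusters and therefore scores highest.
      import networkx as nx

      # Toy co-citation network: nodes are cited papers, an edge means "co-cited".
      G = nx.Graph()
      G.add_edges_from([
          ("A", "B"), ("A", "C"), ("B", "C"),   # cluster 1
          ("D", "E"), ("D", "F"), ("E", "F"),   # cluster 2
          ("C", "P"), ("P", "D"),               # P bridges the two clusters
      ])

      # Freeman betweenness centrality; high values flag candidate pivot points.
      bc = nx.betweenness_centrality(G)
      for node, score in sorted(bc.items(), key=lambda kv: -kv[1]):
          print(f"{node}: {score:.3f}")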
  4. Barton, P.: A missed opportunity : why the benefits of information visualisation seem still out of sight (2005) 0.02
    0.01695725 = product of:
      0.067829 = sum of:
        0.067829 = weight(_text_:media in 1293) [ClassicSimilarity], result of:
          0.067829 = score(doc=1293,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.31049973 = fieldWeight in 1293, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046875 = fieldNorm(doc=1293)
      0.25 = coord(1/4)
    
    Content
    Dissertation for MA degree in electronic media, School of Humanities, Oxford Brookes University
  4. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.02
    0.015769763 = product of:
      0.06307905 = sum of:
        0.06307905 = sum of:
          0.02516541 = weight(_text_:research in 1289) [ClassicSimilarity], result of:
            0.02516541 = score(doc=1289,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.18912788 = fieldWeight in 1289, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=1289)
          0.03791364 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
            0.03791364 = score(doc=1289,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.23214069 = fieldWeight in 1289, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=1289)
      0.25 = coord(1/4)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  5. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.02
    0.015769763 = product of:
      0.06307905 = sum of:
        0.06307905 = sum of:
          0.02516541 = weight(_text_:research in 3693) [ClassicSimilarity], result of:
            0.02516541 = score(doc=3693,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.18912788 = fieldWeight in 3693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046875 = fieldNorm(doc=3693)
          0.03791364 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
            0.03791364 = score(doc=3693,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.23214069 = fieldWeight in 3693, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3693)
      0.25 = coord(1/4)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents and present the results in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  6. Burnett, R.: How images think (2004) 0.01
    0.014954889 = product of:
      0.059819557 = sum of:
        0.059819557 = weight(_text_:media in 3884) [ClassicSimilarity], result of:
          0.059819557 = score(doc=3884,freq=14.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.27383503 = fieldWeight in 3884, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.015625 = fieldNorm(doc=3884)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: JASIST 56(2005) no.10, S.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication - computers, for instance - suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs - they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live" and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world - they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events - for all perceptive, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization - viewers generate what they see in the images. This involves the historical moment of viewing - such as viewing images of the WTC bombings - and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
    The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that to use biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the one we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode of images today, is the subject of chapter seven. The issue here is whether P2P affects community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum - from 17th-century letter writing to e-mail. In the new media - and Burnett uses the example of popular music, which can be sampled and re-edited to create new compositions - the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (about which Burnett has already expended considerable space in the early chapters) involves a lot more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000, and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with image-worlds. Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. Burnett concludes that images no longer simply represent the world - they shape our very interaction with it; they become the foundation for our understanding of the spaces, places, and historical moments that we inhabit. Burnett concludes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
    Burnett's work is a useful basic primer on the new media. One of the chief attractions here is his clear language, devoid of the jargon of either computer sciences or advanced critical theory. This makes How Images Think an accessible introduction to digital cultures. Burnett explores the impact of the new technologies on not just image-making but on image-effects, and the ways in which images constitute our ecologies of identity, communication, and subject-hood. While some of the sections seem a little too basic (especially where he speaks about the ways in which we constitute an object as an object of art, see above), especially in the wake of reception theory, it still remains a starting point for those interested in cultural studies of the new media. The case Burnett makes out for the transformation of the ways in which we look at images has been strengthened by his attention to the history of this transformation - from photography through television and cinema and now to immersive virtual reality systems. Joseph Koerner (2004) has pointed out that the iconoclasm of early modern Europe actually demonstrates how idolatry was integral to the image-breakers' core belief. As Koerner puts it, "images never go away ... they persist and function by being perpetually destroyed" (p. 12). Burnett, likewise, argues that images in new media are reformed to suit new contexts of meaning-production - even when they appear to be destroyed. Images are recast, and the degree of their realism (or fantasy) heightened or diminished - but they do not "go away." Images do think, but - if I can parse Burnett's entire work - they think with, through, and in human intelligence, emotions, and intuitions. Images are uncanny - they are both us and not-us, ours and not-ours. There is, surprisingly, one factual error. Burnett claims that Myron Krueger pioneered the term "virtual reality." To the best of my knowledge, it was Jaron Lanier who did so (see Featherstone & Burrows, 1998 [1995], p. 5)."
  7. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.01
    0.014131041 = product of:
      0.056524165 = sum of:
        0.056524165 = weight(_text_:media in 3870) [ClassicSimilarity], result of:
          0.056524165 = score(doc=3870,freq=2.0), product of:
            0.21845107 = queryWeight, product of:
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.046639 = queryNorm
            0.25874978 = fieldWeight in 3870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6838713 = idf(docFreq=1110, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3870)
      0.25 = coord(1/4)
    
    Abstract
    Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan, upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
  8. Osinska, V.; Kowalska, M.; Osinski, Z.: The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.01
    0.01314147 = product of:
      0.05256588 = sum of:
        0.05256588 = sum of:
          0.020971177 = weight(_text_:research in 4641) [ClassicSimilarity], result of:
            0.020971177 = score(doc=4641,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.15760657 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
          0.0315947 = weight(_text_:22 in 4641) [ClassicSimilarity], result of:
            0.0315947 = score(doc=4641,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.19345059 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
      0.25 = coord(1/4)
    
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of research problems. They can also identify cooperation networks, and consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality as well as attempts to define different user groups. A survey about the interface was sent to select researchers employed at Nicolaus Copernicus University. The results were used to answer the question as to whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  9. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.011006164 = product of:
      0.044024657 = sum of:
        0.044024657 = sum of:
          0.031386778 = weight(_text_:research in 1789) [ClassicSimilarity], result of:
            0.031386778 = score(doc=1789,freq=28.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.23588389 = fieldWeight in 1789, product of:
                5.2915025 = tf(freq=28.0), with freq of:
                  28.0 = termFreq=28.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.012637881 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.012637881 = score(doc=1789,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, relation to perceptual and cognitive science, and discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domain of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300).
    Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement their goals of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
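    The review above lists scatter plots, glyphs, and parallel coordinates among the standard display techniques covered in Part One. As a small illustration only (not taken from the book), the sketch below draws a parallel-coordinates view of an invented multidimensional data set with pandas and matplotlib; all column names and values are hypothetical.
      import pandas as pd
      import matplotlib.pyplot as plt
      from pandas.plotting import parallel_coordinates

      # Hypothetical multidimensional records, labelled by cluster.
      df = pd.DataFrame({
          "cluster": ["a", "a", "b", "b", "c"],
          "dim1": [1.0, 1.2, 3.1, 2.9, 5.0],
          "dim2": [2.0, 2.1, 0.5, 0.7, 1.5],
          "dim3": [0.3, 0.4, 2.2, 2.0, 4.1],
      })

      # One polyline per record; the parallel axes expose multidimensional structure.
      parallel_coordinates(df, class_column="cluster")
      plt.title("Parallel coordinates, toy data")
      plt.show()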
  10. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.01
    0.010513175 = product of:
      0.0420527 = sum of:
        0.0420527 = sum of:
          0.01677694 = weight(_text_:research in 1444) [ClassicSimilarity], result of:
            0.01677694 = score(doc=1444,freq=2.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.12608525 = fieldWeight in 1444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.03125 = fieldNorm(doc=1444)
          0.025275761 = weight(_text_:22 in 1444) [ClassicSimilarity], result of:
            0.025275761 = score(doc=1444,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.15476047 = fieldWeight in 1444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1444)
      0.25 = coord(1/4)
    
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to the queries concerning the selected objects. Knowing the keywords, which are names of the objects, will enable situating the user's query in the information space organized as a thesaurus or faceted classification. Objectives: Identification of the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might become useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is a model of the information space visualization of a given document (book, article, website). The proposed procedure can practically be used as a new form of representation in order to map the contents of academic books and articles, beside the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can be used to help students understand the issues of: classification, categorization and representation of new knowledge emerging in the human mind.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  11. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    0.009187862 = product of:
      0.03675145 = sum of:
        0.03675145 = sum of:
          0.017794631 = weight(_text_:research in 3035) [ClassicSimilarity], result of:
            0.017794631 = score(doc=3035,freq=4.0), product of:
              0.13306029 = queryWeight, product of:
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.046639 = queryNorm
              0.1337336 = fieldWeight in 3035, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                2.8529835 = idf(docFreq=6931, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
          0.01895682 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
            0.01895682 = score(doc=3035,freq=2.0), product of:
              0.16332182 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046639 = queryNorm
              0.116070345 = fieldWeight in 3035, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0234375 = fieldNorm(doc=3035)
      0.25 = coord(1/4)
    
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures-which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more-than-650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score-a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Footnote
    Cf.: http://www.economist.com/news/science-and-technology/21700620-surprisingly-simple-test-check-research-papers-errors-come-again.
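    The article-level Eigenfactor used in the study above is described as a modified PageRank over the citation graph. The sketch below is a generic, hedged illustration of that idea with networkx on an invented six-paper citation graph; it is not the authors' data or implementation.
      import networkx as nx

      # Toy directed citation graph: an edge u -> v means "paper u cites paper v".
      G = nx.DiGraph()
      G.add_edges_from([
          ("p1", "p2"), ("p1", "p3"),
          ("p2", "p3"),
          ("p4", "p2"), ("p4", "p3"),
          ("p5", "p1"),
      ])

      # PageRank weights each citation by the influence of the citing paper,
      # which is the intuition behind Eigenfactor-style scoring.
      ranks = nx.pagerank(G, alpha=0.85)
      for paper, r in sorted(ranks.items(), key=lambda kv: -kv[1]):
          print(paper, round(r, 3))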
  12. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.007898675 = product of:
      0.0315947 = sum of:
        0.0315947 = product of:
          0.0631894 = sum of:
            0.0631894 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.0631894 = score(doc=3406,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 5.2010 16:22:35
  13. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.007898675 = product of:
      0.0315947 = sum of:
        0.0315947 = product of:
          0.0631894 = sum of:
            0.0631894 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.0631894 = score(doc=2755,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22
  14. Aris, A.; Shneiderman, B.; Qazvinian, V.; Radev, D.: Visual overviews for discovering key papers and influences across research fronts (2009) 0.01
    0.007705301 = product of:
      0.030821204 = sum of:
        0.030821204 = product of:
          0.06164241 = sum of:
            0.06164241 = weight(_text_:research in 3156) [ClassicSimilarity], result of:
              0.06164241 = score(doc=3156,freq=12.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.46326676 = fieldWeight in 3156, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3156)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Gaining a rapid overview of an emerging scientific topic, sometimes called research fronts, is an increasingly common task due to the growing amount of interdisciplinary collaboration. Visual overviews that show temporal patterns of paper publication and citation links among papers can help researchers and analysts to see the rate of growth of topics, identify key papers, and understand influences across subdisciplines. This article applies a novel network-visualization tool based on meaningful layouts of nodes to present research fronts and show citation links that indicate influences across research fronts. To demonstrate the value of two-dimensional layouts with multiple regions and user control of link visibility, we conducted a design-oriented, preliminary case study with 6 domain experts over a 4-month period. The main benefits were being able (a) to easily identify key papers and see the increasing number of papers within a research front, and (b) to quickly see the strength and direction of influence across related research fronts.
  15. Su, H.-N.: Visualization of global science and technology policy research structure (2012) 0.01
    0.007705301 = product of:
      0.030821204 = sum of:
        0.030821204 = product of:
          0.06164241 = sum of:
            0.06164241 = weight(_text_:research in 4969) [ClassicSimilarity], result of:
              0.06164241 = score(doc=4969,freq=12.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.46326676 = fieldWeight in 4969, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4969)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This study proposes an approach for visualizing knowledge structures that creates a "research-focused parallelship network," "keyword co-occurrence network," and a knowledge map to visualize Sci-Tech policy research structure. A total of 1,125 Sci-Tech policy-related papers (873 journal papers [78%], 205 conference papers [18%], and 47 review papers [4%]) have been retrieved from the Web of Science database for quantitative analysis and mapping. Different network and contour maps based on these 1,125 papers can be constructed by choosing different information as the main actor, such as the paper title, the institute, the country, or the author keywords, to reflect Sci-Tech policy research structures in micro-, meso-, and macro-levels, respectively. The quantitative way of exploring Sci-Tech policy research papers is investigated to unveil important or emerging Sci-Tech policy implications as well as to demonstrate the dynamics and visualization of the evolution of Sci-Tech policy research.
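    A keyword co-occurrence network of the kind built in the study above can be derived directly from the author-keyword field of each record. The sketch below is a minimal, hypothetical illustration (four invented records standing in for the 1,125 Web of Science papers): every keyword pair appearing in the same paper becomes a weighted edge.
      from itertools import combinations
      from collections import Counter

      # Hypothetical author keywords per paper.
      papers = [
          {"science policy", "innovation", "R&D"},
          {"science policy", "bibliometrics"},
          {"innovation", "R&D", "technology transfer"},
          {"science policy", "innovation"},
      ]

      # Count how often two keywords occur in the same paper.
      cooccurrence = Counter()
      for keywords in papers:
          for a, b in combinations(sorted(keywords), 2):
              cooccurrence[(a, b)] += 1

      # Each counted pair is a weighted edge of the co-occurrence network.
      for (a, b), weight in cooccurrence.most_common():
          print(f"{a} -- {b}: {weight}")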
  16. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.006702249 = product of:
      0.026808996 = sum of:
        0.026808996 = product of:
          0.05361799 = sum of:
            0.05361799 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.05361799 = score(doc=3355,freq=4.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  17. Collins, C.: WordNet explorer : applying visualization principles to lexical semantics (2006) 0.01
    0.005931544 = product of:
      0.023726176 = sum of:
        0.023726176 = product of:
          0.047452353 = sum of:
            0.047452353 = weight(_text_:research in 1288) [ClassicSimilarity], result of:
              0.047452353 = score(doc=1288,freq=4.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.35662293 = fieldWeight in 1288, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1288)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Interface designs for lexical databases in NLP have suffered from not following design principles developed in the information visualization research community. We present a design paradigm and show it can be used to generate visualizations which maximize the usability and utility of WordNet. The techniques can be generally applied to other lexical databases used in NLP research.
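    The lexical relations such an interface lays out visually can be read straight from WordNet. The sketch below is only an illustration of that neighbourhood structure using NLTK's WordNet corpus reader (assuming the "wordnet" corpus has been downloaded and using "bank" as an arbitrary example word); it is not the authors' WordNet Explorer.
      from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

      # Senses and relations a WordNet visualization would lay out for one word.
      for synset in wn.synsets("bank")[:3]:
          print(synset.name(), "-", synset.definition())
          print("  hypernyms:", [h.name() for h in synset.hypernyms()])
          print("  hyponyms: ", [h.name() for h in synset.hyponyms()[:3]])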
  18. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    0.0055290726 = product of:
      0.02211629 = sum of:
        0.02211629 = product of:
          0.04423258 = sum of:
            0.04423258 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
              0.04423258 = score(doc=2500,freq=2.0), product of:
                0.16332182 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046639 = queryNorm
                0.2708308 = fieldWeight in 2500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2500)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 1.2007 18:22:41
  19. Rafols, I.; Porter, A.L.; Leydesdorff, L.: Science overlay maps : a new tool for research policy and library management (2010) 0.01
    0.0054484713 = product of:
      0.021793885 = sum of:
        0.021793885 = product of:
          0.04358777 = sum of:
            0.04358777 = weight(_text_:research in 3987) [ClassicSimilarity], result of:
              0.04358777 = score(doc=3987,freq=6.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.3275791 = fieldWeight in 3987, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3987)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We present a novel approach to visually locate bodies of research within the sciences, both at each moment of time and dynamically. This article describes how this approach fits with other efforts to locally and globally map scientific outputs. We then show how these science overlay maps help benchmarking, explore collaborations, and track temporal changes, using examples of universities, corporations, funding agencies, and research topics. We address their conditions of application and discuss advantages, downsides, and limitations. Overlay maps especially help investigate the increasing number of scientific developments and organizations that do not fit within traditional disciplinary categories. We make these tools available online to enable researchers to explore the ongoing sociocognitive transformations of science and technology systems.
  20. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.01
    0.0054484713 = product of:
      0.021793885 = sum of:
        0.021793885 = product of:
          0.04358777 = sum of:
            0.04358777 = weight(_text_:research in 2906) [ClassicSimilarity], result of:
              0.04358777 = score(doc=2906,freq=6.0), product of:
                0.13306029 = queryWeight, product of:
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046639 = queryNorm
                0.3275791 = fieldWeight in 2906, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  2.8529835 = idf(docFreq=6931, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.

Languages

  • e 71
  • d 4
  • a 1

Types

  • a 59
  • m 11
  • el 10
  • x 3
  • s 2
  • b 1
