Search (9 results, page 1 of 1)

  • × language_ss:"e"
  • × theme_ss:"Visualisierung"
  • × year_i:[2000 TO 2010}
  1. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.03
    0.02553383 = product of:
      0.10213532 = sum of:
        0.10213532 = weight(_text_:objects in 3888) [ClassicSimilarity], result of:
          0.10213532 = score(doc=3888,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.29361898 = fieldWeight in 3888, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3888)
      0.25 = coord(1/4)
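The explain tree above follows Lucene's ClassicSimilarity (TF-IDF): the leaf score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √freq × idf × fieldNorm, and the top-level value multiplies in the coord factor. A minimal sketch that recomputes the tree from its leaf values (the function name is ours; the numbers are taken directly from the tree above):

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coord):
    """Recompute a Lucene ClassicSimilarity explain tree from its leaves."""
    tf = math.sqrt(freq)                  # 1.4142135 for freq=2.0
    query_weight = idf * query_norm       # idf * queryNorm = 0.34784988
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm = 0.29361898
    return coord * query_weight * field_weight

score = classic_similarity_score(freq=2.0, idf=5.315071,
                                 query_norm=0.06544595,
                                 field_norm=0.0390625, coord=0.25)
print(round(score, 8))  # ≈ 0.02553383, the top-level score in the tree above
```

The same formula reproduces the other "objects" trees below; note that result 2 has freq=4.0, so its tf is √4 = 2.0 exactly.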
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
  2. Zhu, B.; Chen, H.: Information visualization (2004) 0.03
    0.025277203 = product of:
      0.10110881 = sum of:
        0.10110881 = weight(_text_:objects in 4276) [ClassicSimilarity], result of:
          0.10110881 = score(doc=4276,freq=4.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.29066795 = fieldWeight in 4276, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.25 = coord(1/4)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures.
Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize the physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  3. Heo, M.; Hirtle, S.C.: ¬An empirical comparison of visualization tools to assist information retrieval on the Web (2001) 0.02
    0.020427063 = product of:
      0.08170825 = sum of:
        0.08170825 = weight(_text_:objects in 5215) [ClassicSimilarity], result of:
          0.08170825 = score(doc=5215,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.23489517 = fieldWeight in 5215, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=5215)
      0.25 = coord(1/4)
    
    Abstract
    To make maximum use of a hypertext document in a web environment, the reader must visualize the overall structure of the paths through the document as well as the document space. Graphic visualization displays of this space, produced to assist in navigation, are classified into four groups, and Heo and Hirtle compare three of these classes as to their effectiveness. Distortion displays expand regions of interest while relatively diminishing the detail of the remaining regions. This technique will show both local detail and global structure. Zoom techniques use a series of increasingly focused displays of smaller and smaller areas, and can reduce cognitive overload, but do not provide easy movement to other parts of the total space. Expanding outline displays use a tree structure to allow movement through a hierarchy of documents, but if the organization has a wide horizontal structure, or is not particularly hierarchical in nature, such displays can break down. Three-dimensional layouts, which are not evaluated here, place objects by location in three-dimensional space, providing more information and freedom. However, the space must be represented in two dimensions, resulting in difficulty in visually judging depth, size, and positioning. Ten students were assigned to each of eight groups composed of viewers of the three techniques and an unassisted control group, using either a large (583 selected pages) or a small (50 selected pages) web space. Sets of 10 questions, which were designed to elicit the use of a visualization tool, were provided for each space. Accuracy and time spent were extracted from a log file. Users' views were also surveyed after completion. ANOVA shows significant differences in accuracy and time based upon the visualization tool in use. A Tukey test shows zoom accuracy to be significantly less than expanding outline, and zoom time to be significantly greater than both the outline and control groups.
Size significantly affected accuracy and time, but had no interaction with tool type. While the expanding tool class outperformed zoom and distortion, its performance was not significantly different from the control group.
  4. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Domain visualization using VxInsight® for science and technology management (2002) 0.02
    0.020427063 = product of:
      0.08170825 = sum of:
        0.08170825 = weight(_text_:objects in 5244) [ClassicSimilarity], result of:
          0.08170825 = score(doc=5244,freq=2.0), product of:
            0.34784988 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06544595 = queryNorm
            0.23489517 = fieldWeight in 5244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=5244)
      0.25 = coord(1/4)
    
    Abstract
    Boyack, Wylie, and Davidson developed VxInsight, which transforms information from documents into a landscape representation that conveys information on the implicit structure of the data as context for queries and exploration. From a list of pre-computed similarities it creates an x,y location on a plane for each item, or it can compute its own similarities based on direct and co-citation linkages. Three-dimensional overlays are then generated on the plane to show the extent of clustering at particular points. Metadata associated with clustered objects provides a label for each peak, drawn from common words. Clicking on an object will provide citation information, and answer sets for queries run will be displayed as markers on the landscape. A time slider allows a view of terrain changes over time. In a test on the microsystems engineering literature, a review article was used to provide seed terms to search Science Citation Index and retrieve 20,923 articles, of which 13,433 were connected by citation to at least one other article in the set. The citation list was used to calculate similarity measures and x,y coordinates for each article. Four main categories made up the landscape, with 90% of the articles directly related to one or more of the four. A second test used five databases: SCI, Cambridge Scientific Abstracts, Engineering Index, INSPEC, and Medline to extract 17,927 unique articles by Sandia, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory, with the text of abstracts and RetrievalWare 6.6 utilized to generate the similarity measures. The subsequent map revealed that despite some overlap the laboratories generally publish in different areas. A third test on 3000 physical science journals utilized 4.7 million articles from SCI, where similarity was the un-normalized sum of cites between journals in both directions. Physics occupies a central position, with engineering, mathematics, computing, and materials science strongly linked.
Chemistry is farther removed but strongly connected.
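The co-citation linkage the abstract mentions is a simple pair-counting measure: two items are similar to the degree that citing documents list them together. The paper gives no code, so this is only a minimal sketch under our own naming assumptions (identifiers such as `cocitation_similarity` and the "p1"-style IDs are hypothetical):

```python
from itertools import combinations
from collections import Counter

def cocitation_similarity(references_by_doc):
    """Count how often each pair of cited items appears together in a
    citing document's reference list; higher counts mean more similar."""
    pair_counts = Counter()
    for refs in references_by_doc:
        # sorted() gives each unordered pair one canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# Three citing documents and the (hypothetical) items each one cites:
citing = [["p1", "p2", "p3"], ["p1", "p2"], ["p2", "p3"]]
sim = cocitation_similarity(citing)
print(sim[("p1", "p2")])  # 2: cited together by two documents
```

A layout engine like the one described would then place items with high pair counts near each other on the x,y plane; the un-normalized "sum of cites in both directions" used in the third test is an analogous raw-count similarity.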
  5. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.013300534 = product of:
      0.053202137 = sum of:
        0.053202137 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
          0.053202137 = score(doc=1289,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.23214069 = fieldWeight in 1289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
      0.25 = coord(1/4)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  6. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.011083779 = product of:
      0.044335116 = sum of:
        0.044335116 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
          0.044335116 = score(doc=1397,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.19345059 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
      0.25 = coord(1/4)
    
    Date
    22. 3.2008 14:29:25
  7. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.01
    0.011083779 = product of:
      0.044335116 = sum of:
        0.044335116 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
          0.044335116 = score(doc=5272,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.19345059 = fieldWeight in 5272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5272)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 16:11:05
  8. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.01
    0.0088670235 = product of:
      0.035468094 = sum of:
        0.035468094 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
          0.035468094 = score(doc=2659,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.15476047 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
      0.25 = coord(1/4)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Information visualization in data mining and knowledge discovery (2002) 0.00
    0.0044335118 = product of:
      0.017734047 = sum of:
        0.017734047 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
          0.017734047 = score(doc=1789,freq=2.0), product of:
            0.22918057 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06544595 = queryNorm
            0.07738023 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22