Search (43 results, page 1 of 3)

  • × theme_ss:"Visualisierung"
  1. Information visualization in data mining and knowledge discovery (2002) 0.03
    0.03121005 = product of:
      0.10923517 = sum of:
        0.025441473 = weight(_text_:computer in 1789) [ClassicSimilarity], result of:
          0.025441473 = score(doc=1789,freq=10.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.18057145 = fieldWeight in 1789, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.08379369 = sum of:
          0.07334675 = weight(_text_:aufsatzsammlung in 1789) [ClassicSimilarity], result of:
            0.07334675 = score(doc=1789,freq=8.0), product of:
              0.25295308 = queryWeight, product of:
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.038553525 = queryNorm
              0.28996187 = fieldWeight in 1789, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                6.5610886 = idf(docFreq=169, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.010446941 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.010446941 = score(doc=1789,freq=2.0), product of:
              0.13500787 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.038553525 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.2857143 = coord(2/7)
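The explanation tree above is Lucene ClassicSimilarity (TF-IDF) output. As a sketch, the top-level score of the first hit can be reproduced from the factors shown (tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, and a coord factor for the fraction of matched query clauses); the constants below are copied directly from the explanation:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One weight(...) node: queryWeight * fieldWeight (ClassicSimilarity)."""
    tf = math.sqrt(freq)                  # e.g. 3.1622777 for freq=10
    query_weight = idf * query_norm       # e.g. 0.14089422 for 'computer'
    field_weight = tf * idf * field_norm  # e.g. 0.18057145 in doc 1789
    return query_weight * field_weight

# Factors copied from the explanation for document 1789.
query_norm, field_norm = 0.038553525, 0.015625
computer = term_score(10.0, 3.6545093, query_norm, field_norm)        # ~0.02544
aufsatzsammlung = term_score(8.0, 6.5610886, query_norm, field_norm)  # ~0.07335
t22 = term_score(2.0, 3.5018296, query_norm, field_norm)              # ~0.01045

coord = 2 / 7  # 2 of 7 query clauses matched
score = (computer + aufsatzsammlung + t22) * coord
print(round(score, 8))  # ~0.03121005, the displayed relevance score
```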
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Rez. in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8).
It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet, much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement the editors' goal of integrating work in this area. Unfortunately, many of the application chapters are terse, shallow, and lack complementary illustrations of visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
    RSWK
    Data Mining / Visualisierung / Aufsatzsammlung (BVB)
    Wissensextraktion / Visualisierung / Aufsatzsammlung (BVB)
    Subject
    Data Mining / Visualisierung / Aufsatzsammlung (BVB)
    Wissensextraktion / Visualisierung / Aufsatzsammlung (BVB)
  2. Information visualization : human-centered issues and perspectives (2008) 0.03
    0.026649835 = product of:
      0.093274415 = sum of:
        0.028444434 = weight(_text_:computer in 3285) [ClassicSimilarity], result of:
          0.028444434 = score(doc=3285,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.20188503 = fieldWeight in 3285, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3285)
        0.06482998 = product of:
          0.12965997 = sum of:
            0.12965997 = weight(_text_:aufsatzsammlung in 3285) [ClassicSimilarity], result of:
              0.12965997 = score(doc=3285,freq=4.0), product of:
                0.25295308 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.038553525 = queryNorm
                0.51258504 = fieldWeight in 3285, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3285)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    RSWK
    Information / Visualisierung / Aufsatzsammlung
    Series
    Lecture notes in computer science ; 4950
    Subject
    Information / Visualisierung / Aufsatzsammlung
  3. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.02
    0.023716064 = product of:
      0.08300622 = sum of:
        0.056888867 = weight(_text_:computer in 2755) [ClassicSimilarity], result of:
          0.056888867 = score(doc=2755,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.40377006 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.026117353 = product of:
          0.052234706 = sum of:
            0.052234706 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.052234706 = score(doc=2755,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  4. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.02
    0.021368874 = product of:
      0.07479106 = sum of:
        0.059120644 = weight(_text_:computer in 3693) [ClassicSimilarity], result of:
          0.059120644 = score(doc=3693,freq=6.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.41961014 = fieldWeight in 3693, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.015670411 = product of:
          0.031340823 = sum of:
            0.031340823 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.031340823 = score(doc=3693,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in their taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  5. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.02
    0.0152243385 = product of:
      0.05328518 = sum of:
        0.040226504 = weight(_text_:computer in 1397) [ClassicSimilarity], result of:
          0.040226504 = score(doc=1397,freq=4.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.28550854 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.013058676 = product of:
          0.026117353 = sum of:
            0.026117353 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.026117353 = score(doc=1397,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Date
    22. 3.2008 14:29:25
    LCSH
    User interfaces (Computer systems)
    Subject
    User interfaces (Computer systems)
  6. IEEE symposium on information visualization 2003 : Seattle, Washington, October 19 - 21, 2003 ; InfoVis 2003. Proceedings (2003) 0.01
    0.014537985 = product of:
      0.10176589 = sum of:
        0.10176589 = weight(_text_:computer in 1455) [ClassicSimilarity], result of:
          0.10176589 = score(doc=1455,freq=10.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.7222858 = fieldWeight in 1455, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=1455)
      0.14285715 = coord(1/7)
    
    Issue
    Sponsored by IEEE Computer Society Technical Committee on Visualization and Graphics.
    LCSH
    Computer graphics / Congresses
    Computer vision / Congresses
    Subject
    Computer graphics / Congresses
    Computer vision / Congresses
  7. Representation in scientific practice revisited (2014) 0.01
    0.0074091414 = product of:
      0.051863987 = sum of:
        0.051863987 = product of:
          0.103727974 = sum of:
            0.103727974 = weight(_text_:aufsatzsammlung in 3543) [ClassicSimilarity], result of:
              0.103727974 = score(doc=3543,freq=4.0), product of:
                0.25295308 = queryWeight, product of:
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.038553525 = queryNorm
                0.41006804 = fieldWeight in 3543, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.5610886 = idf(docFreq=169, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3543)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    RSWK
    Naturwissenschaften / Visualisierung / Wissenschaftsforschung / Wissensrepräsentation / Aufsatzsammlung / Bildliche Darstellung / Methodologie (BVB)
    Subject
    Naturwissenschaften / Visualisierung / Wissenschaftsforschung / Wissensrepräsentation / Aufsatzsammlung / Bildliche Darstellung / Methodologie (BVB)
  8. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    0.007114819 = product of:
      0.024901865 = sum of:
        0.01706666 = weight(_text_:computer in 3035) [ClassicSimilarity], result of:
          0.01706666 = score(doc=3035,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.12113102 = fieldWeight in 3035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.007835206 = product of:
          0.015670411 = sum of:
            0.015670411 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.015670411 = score(doc=3035,freq=2.0), product of:
                0.13500787 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038553525 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures-which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more-than-650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score-a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
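The Eigenfactor idea described above, weighting a citation by the influence of the paper it comes from, can be illustrated with a PageRank-style iteration. A minimal sketch on a toy citation graph (the graph, damping factor, and iteration count are illustrative assumptions, not the authors' actual computation):

```python
def influence_scores(citations, damping=0.85, iterations=50):
    """PageRank-style scores over a citation graph given as (citing, cited) pairs.

    A citation from a highly scored paper transfers more score than one
    from an obscure paper -- the weighting idea behind Eigenfactor.
    """
    papers = sorted({p for edge in citations for p in edge})
    n = len(papers)
    out_degree = {p: 0 for p in papers}
    for citing, _ in citations:
        out_degree[citing] += 1
    score = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in papers}
        for citing, cited in citations:
            new[cited] += damping * score[citing] / out_degree[citing]
        # papers that cite nothing spread their score uniformly
        dangling = damping * sum(score[p] for p in papers if out_degree[p] == 0) / n
        for p in papers:
            new[p] += dangling
        score = new
    return score

# Toy graph: D is cited once by the influential A, E once by the obscure C.
edges = [("A", "B"), ("B", "A"), ("C", "A"), ("A", "D"), ("C", "E")]
s = influence_scores(edges)
print(s["D"] > s["E"])  # a citation from A (itself highly cited) outweighs one from C
```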
As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter-but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
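The reported averages can be read as a simple additive model of expected citations (the linear functional form here is an illustrative assumption; the study reports only average marginal effects, not a predictive formula):

```python
def expected_citations(diagrams_per_page, plots_per_page,
                       baseline=1.67, avg_diagrams=1 / 3):
    """Illustrative linear reading of the reported effects:
    +2 citations per extra diagram/page, +1 per extra plot/page,
    relative to the average paper (one diagram per three pages)."""
    return (baseline
            + 2.0 * (diagrams_per_page - avg_diagrams)
            + 1.0 * plots_per_page)

print(expected_citations(1 / 3, 0.0))  # the average paper: 1.67
print(expected_citations(4 / 3, 0.0))  # one extra diagram per page: 3.67
```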
  9. Barton, P.: ¬A missed opportunity : why the benefits of information visualisation seem still out of sight (2005) 0.01
    0.006895972 = product of:
      0.0482718 = sum of:
        0.0482718 = weight(_text_:computer in 1293) [ClassicSimilarity], result of:
          0.0482718 = score(doc=1293,freq=4.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.34261024 = fieldWeight in 1293, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=1293)
      0.14285715 = coord(1/7)
    
    Abstract
    This paper aims to identify what information visualisation is and how, in conjunction with the computer, it can be used as a tool to expand understanding. It also seeks to explain how information visualisation has been fundamental to the development of the computer from its very early days to Apple's launch of the now ubiquitous W.I.M.P (Windows, Icon, Menu, Program) graphical user interface in 1984. An attempt is also made to question why, after many years of progress and development through the late 1960s and 1970s, very little has changed in the way we interact with the data on our computers since the watershed of the Macintosh, and in conclusion where the future of information visualisation may lie.
  10. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.01
    0.006501585 = product of:
      0.045511093 = sum of:
        0.045511093 = weight(_text_:computer in 317) [ClassicSimilarity], result of:
          0.045511093 = score(doc=317,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.32301605 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
      0.14285715 = coord(1/7)
    
    Series
    Lecture notes in computer science ; 5173
  11. Hearst, M.A.: Search user interfaces (2009) 0.01
    0.006501585 = product of:
      0.045511093 = sum of:
        0.045511093 = weight(_text_:computer in 4029) [ClassicSimilarity], result of:
          0.045511093 = score(doc=4029,freq=8.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.32301605 = fieldWeight in 4029, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.03125 = fieldNorm(doc=4029)
      0.14285715 = coord(1/7)
    
    LCSH
    User interfaces (Computer systems)
    Human / computer interaction
    Subject
    User interfaces (Computer systems)
    Human / computer interaction
  12. Catarci, T.; Spaccapietra, S.: Visual information querying (2002) 0.01
    0.0064505916 = product of:
      0.04515414 = sum of:
        0.04515414 = weight(_text_:computer in 4268) [ClassicSimilarity], result of:
          0.04515414 = score(doc=4268,freq=14.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.32048255 = fieldWeight in 4268, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4268)
      0.14285715 = coord(1/7)
    
    Abstract
    Computers have become our companions in many of the activities we pursue in our life. They assist us, in particular, in searching relevant information that is needed to perform a variety of tasks, from professional usage to personal entertainment. They hold this information in a huge number of heterogeneous sources, either dedicated to a specific user community (e.g., enterprise databases) or maintained for the general public (e.g., websites and digital libraries). Whereas progress in basic information technology is nowadays capable of guaranteeing effective information management, information retrieval and dissemination has become a core issue that needs further accomplishments to achieve user satisfaction. The research communities in databases, information retrieval, information visualization, and human-computer interaction have already largely investigated these domains. However, the technical environment has so dramatically evolved in recent years, inducing a parallel and very significant evolution in user habits and expectations, that new approaches are definitely needed to meet current demand. One of the most evident and significant changes is the human-computer interaction paradigm. Traditional interactions relied on programming to express user information requirements in formal code and on textual output to convey to users the information extracted by the system. Except for professional data-intensive application frameworks, still in the hands of computer specialists, we have basically moved away from this pattern both in terms of expressing information requests and conveying results. The new goal is direct interaction with the final user (the person who is looking for information and is not necessarily familiar with computer technology). The key motto to achieve this is "go visual." The well-known high bandwidth of the human-vision channel allows both recognition and understanding of large quantities of information in no more than a few seconds.
Thus, for instance, if the result of an information request can be organized as a visual display, or a sequence of visual displays, the information throughput is immensely superior to the one that can be achieved using textual support. User interaction becomes an iterative query-answer game that very rapidly leads to the desired final result. Conversely, the system can provide efficient visual support for easy query formulation. Displaying a visual representation of the information space, for instance, lets users directly point at the information they are looking for, without any need to be trained into the complex syntax of current query languages. Alternatively, users can navigate in the information space, following visible paths that will lead them to the targeted items. Again, thanks to the visual support, users are able to easily understand how to formulate queries and they are likely to achieve the task more rapidly and less prone to errors than with traditional textual interaction modes.
    The two facets of "going visual" are usually referred to as visual query systems, for query formulation, and information visualization, for result display. Visual Query Systems (VQSs) are defined as systems for querying databases that use a visual representation to depict the domain of interest and express related requests. VQSs provide both a language to express the queries in a visual format and a variety of functionalities to facilitate user-system interaction. As such, they are oriented toward a wide spectrum of users, especially novices who have limited computer expertise and generally ignore the inner structure of the accessed database. Information visualization, an increasingly important subdiscipline within the field of Human-Computer Interaction (HCI), focuses on visual mechanisms designed to communicate clearly to the user the structure of information and improve on the cost of accessing large data repositories. In printed form, information visualization has included the display of numerical data (e.g., bar charts, plot charts, pie charts), combinatorial relations (e.g., drawings of graphs), and geographic data (e.g., encoded maps). In addition to these "static" displays, computer-based systems, such as the Information Visualizer and Dynamic Queries, have coupled powerful visualization techniques (e.g., 3D, animation) with near real-time interactivity (i.e., the ability of the system to respond quickly to the user's direct manipulation commands). Information visualization is tightly combined with querying capabilities in some recent database-centered approaches. More opportunities for information visualization in a database environment may be found today in data mining and data warehousing applications, which typically access large data repositories. The enormous quantity of information sources on the World-Wide Web (WWW) available to users with diverse capabilities also calls for visualization techniques.
In this article, we survey the main features and main proposals for visual query systems and touch upon the visualization of results mainly discussing traditional visualization forms. A discussion of modern database visualization techniques may be found elsewhere. Many related articles by Daniel Keim are available at http://www.informatik.uni-halle.de/dbs/publications.html.
  13. Huang, S.-C.; Bias, R.G.; Schnyer, D.: How are icons processed by the brain? : Neuroimaging measures of four types of visual stimuli used in information systems (2015) 0.01
    0.0057466435 = product of:
      0.040226504 = sum of:
        0.040226504 = weight(_text_:computer in 1725) [ClassicSimilarity], result of:
          0.040226504 = score(doc=1725,freq=4.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.28550854 = fieldWeight in 1725, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1725)
      0.14285715 = coord(1/7)
    
    Abstract
    We sought to understand how users interpret meanings of symbols commonly used in information systems, especially how icons are processed by the brain. We investigated Chinese and English speakers' processing of 4 types of visual stimuli: icons, pictures, Chinese characters, and English words. The goal was to examine, via functional magnetic resonance imaging (fMRI) data, the hypothesis that people cognitively process icons as logographic words and to provide neurological evidence related to human-computer interaction (HCI), which has been rare in traditional information system studies. According to the neuroimaging data of 19 participants, we conclude that icons are not cognitively processed as logographical words like Chinese characters, although they both stimulate the semantic system in the brain that is needed for language processing. Instead, more similar to images and pictures, icons are not as efficient as words in conveying meanings, and brains (people) make more effort to process icons than words. We use this study to demonstrate that it is practicable to test information system constructs such as elements of graphical user interfaces (GUIs) with neuroscience data and that, with such data, we can better understand individual or group differences related to system usage and user-computer interactions.
  14. Ross, A.: Screen-based information design : Eforms (2000) 0.01
    0.0056888866 = product of:
      0.039822206 = sum of:
        0.039822206 = weight(_text_:computer in 712) [ClassicSimilarity], result of:
          0.039822206 = score(doc=712,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.28263903 = fieldWeight in 712, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=712)
      0.14285715 = coord(1/7)
    
    Abstract
The study surveys and delineates the processes involved in screen-based information design, specifically in relation to the creation of electronic forms, and from this offers a guide to their production. The study also examines the design and technological issues associated with the transfer, or translation, of the printed form to the computer screen: how an Eform might be made more visually engaging without detracting from the information relevant to the form's navigation and completion, and how the interaction between technology and (document) structure allows technology to eliminate or reduce traditional structural problems through the application of non-linear strategies. It reviews potential solutions for incorporating improved functionality through interactivity.
  15. Lin, X.; Bui, Y.: Information visualization (2009) 0.01
    0.0056888866 = product of:
      0.039822206 = sum of:
        0.039822206 = weight(_text_:computer in 3818) [ClassicSimilarity], result of:
          0.039822206 = score(doc=3818,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.28263903 = fieldWeight in 3818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3818)
      0.14285715 = coord(1/7)
    
    Abstract
    The goal of information visualization (IV) is to amplify human cognition through computer-generated, interactive, and visual data representation. By combining the computational power with human perceptional and associative capabilities, IV will make it easier for users to navigate through large amounts of information, discover patterns or hidden structures of the information, and understand semantics of the information space. This entry reviews the history and background of IV and discusses its basic principles with pointers to relevant resources. The entry also summarizes major IV techniques and toolkits and shows various examples of IV applications.
  16. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.01
    0.005451745 = product of:
      0.038162213 = sum of:
        0.038162213 = weight(_text_:computer in 1197) [ClassicSimilarity], result of:
          0.038162213 = score(doc=1197,freq=10.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.2708572 = fieldWeight in 1197, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
      0.14285715 = coord(1/7)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read-a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to hone in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves-which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing"-but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space-we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. 
We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we neither move ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive the scope of the library: its breadth, (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and the number of items in these alphabetical lists sometimes can be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. 
This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potentials of current information visualization approaches in order to improve information discovery, offer new services, and most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs to optimize utility and provide a positive experience for the user.
  17. Zhu, B.; Chen, H.: Information visualization (2004) 0.00
    0.0049267206 = product of:
      0.034487043 = sum of:
        0.034487043 = weight(_text_:computer in 4276) [ClassicSimilarity], result of:
          0.034487043 = score(doc=4276,freq=6.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.24477258 = fieldWeight in 4276, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.14285715 = coord(1/7)
    
    Abstract
Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. 
Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  18. Soylu, A.; Giese, M.; Jimenez-Ruiz, E.; Kharlamov, E.; Zheleznyakov, D.; Horrocks, I.: Towards exploiting query history for adaptive ontology-based visual query formulation (2014) 0.00
    0.0048761885 = product of:
      0.03413332 = sum of:
        0.03413332 = weight(_text_:computer in 1576) [ClassicSimilarity], result of:
          0.03413332 = score(doc=1576,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.24226204 = fieldWeight in 1576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=1576)
      0.14285715 = coord(1/7)
    
    Series
    Communications in computer and information science; 478
  19. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.00
    0.0048761885 = product of:
      0.03413332 = sum of:
        0.03413332 = weight(_text_:computer in 3230) [ClassicSimilarity], result of:
          0.03413332 = score(doc=3230,freq=2.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.24226204 = fieldWeight in 3230, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.14285715 = coord(1/7)
    
    Abstract
Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from the Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with the VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user need not produce a network based on all data (e.g., from 1 year), but can collect the Mendeley data for a single institution (or journals, topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and create the network with the overlay technique.
  20. Burnett, R.: How images think (2004) 0.00
    0.0043003946 = product of:
      0.03010276 = sum of:
        0.03010276 = weight(_text_:computer in 3884) [ClassicSimilarity], result of:
          0.03010276 = score(doc=3884,freq=14.0), product of:
            0.14089422 = queryWeight, product of:
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.038553525 = queryNorm
            0.21365504 = fieldWeight in 3884, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.6545093 = idf(docFreq=3109, maxDocs=44218)
              0.015625 = fieldNorm(doc=3884)
      0.14285715 = coord(1/7)
    
    Footnote
Rez. in: JASIST 56(2005) no.10, S.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication-computers, for instance-suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs-they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live" and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world-they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events-for all perceptual, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. 
As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization-viewers generate what they see in the images. This involves the historical moment of viewing-such as viewing images of the WTC bombings-and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
Moving on to virtual images, Burnett posits the existence of "microcultures": places where people take control of the means of creation and production in order to make sense of their social and cultural experiences. Driven by the need for community, such microcultures generate specific images as part of a cultural movement (Burnett in fact argues that microcultures make it possible for a "small cinema of twenty-five seats to become part of a cultural movement" [p. 63]), where the process of visualization-which involves an awareness of the historical moment - is central to the info-world and imagescapes presented. The computer becomes an archive, a history. The challenge is not only of preserving information, but also of extracting information. Visualization increasingly involves this process of picking a "vantage point" in order to selectively assimilate the information. In virtual reality systems, and in the digital age in general, the distance between what is being pictured and what is experienced is overcome. Images used to be treated as opaque or transparent films among experience, perception, and thought. But, now, images are taken to another level, where the viewer is immersed in the image-experience. Burnett argues-though this is hardly a fresh insight-that "interactivity is only possible when images are the raw material used by participants to change if not transform the purpose of their viewing experience" (p. 90). He suggests that a work of art, "does not start its life as an image ... it gains the status of image when it is placed into a context of viewing and visualization" (p. 90). With simulations and cyberspace the viewing experience has been changed utterly. Burnett defines simulation as "mapping different realities into images that have an environmental, cultural, and social form" (p. 95). 
However, the emphasis in Burnett is significant-he suggests that interactivity is not achieved through effects, but as a result of experiences attached to stories. Narrative is not merely the effect of technology-it is as much about awareness as it is about fantasy. Heightened awareness, which is popular culture's aim at all times, and now available through head-mounted displays (HMD), also involves human emotions and the subtleties of human intuition.
The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that to use biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the one we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode of images today, is the subject of chapter seven. The issue here is whether P2P affects community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum-from 17th-century letter writing to e-mail. In the new media-and Burnett uses the example of popular music which can be sampled, and reedited to create new compositions - the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (about which Burnett has already expended considerable space in the early chapters) involves a lot more of visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000 and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with imageworlds. 
Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. Burnett concludes that images no longer simply represent the world-they shape our very interaction with it; they become the foundation for our understanding the spaces, places, and historical moments that we inhabit. Burnett concludes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
Burnett's work is a useful basic primer on the new media. One of the chief attractions here is his clear language, devoid of the jargon of either computer sciences or advanced critical theory. This makes How Images Think an accessible introduction to digital cultures. Burnett explores the impact of the new technologies on not just image-making but on image-effects, and the ways in which images constitute our ecologies of identity, communication, and subject-hood. While some of the sections seem a little too basic (especially where he speaks about the ways in which we constitute an object as an object of art, see above), especially in the wake of reception theory, it still remains a starting point for those interested in cultural studies of the new media. The case Burnett makes out for the transformation of the ways in which we look at images has been strengthened by his attention to the history of this transformation-from photography through television and cinema and now to immersive virtual reality systems. Joseph Koerner (2004) has pointed out that the iconoclasm of early modern Europe actually demonstrates how idolatry was integral to the image-breakers' core belief. As Koerner puts it, "images never go away ... they persist and function by being perpetually destroyed" (p. 12). Burnett, likewise, argues that images in new media are reformed to suit new contexts of meaning-production-even when they appear to be destroyed. Images are recast, and the degree of their realism (or fantasy) heightened or diminished-but they do not "go away." Images do think, but-if I can parse Burnett's entire work-they think with, through, and in human intelligence, emotions, and intuitions. Images are uncanny-they are both us and not-us, ours and not-ours. There is, surprisingly, one factual error. Burnett claims that Myron Krueger pioneered the term "virtual reality." To the best of my knowledge, it was Jaron Lanier who did so (see Featherstone & Burrows, 1998 [1995], p. 5)."

Languages

  • e 36
  • d 6
  • a 1
