Search (28 results, page 1 of 2)

  • theme_ss:"Visualisierung"
  1. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.03
    0.029997721 = product of:
      0.089993164 = sum of:
        0.089993164 = sum of:
          0.05813029 = weight(_text_:networks in 5272) [ClassicSimilarity], result of:
            0.05813029 = score(doc=5272,freq=2.0), product of:
              0.22247115 = queryWeight, product of:
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.047034867 = queryNorm
              0.26129362 = fieldWeight in 5272, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
          0.031862877 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
            0.031862877 = score(doc=5272,freq=2.0), product of:
              0.1647081 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047034867 = queryNorm
              0.19345059 = fieldWeight in 5272, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
      0.33333334 = coord(1/3)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
    Date
    22. 7.2006 16:11:05
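The score breakdowns shown for each hit are Lucene ClassicSimilarity explain() trees: a term's contribution is queryWeight × fieldWeight, with tf = √freq, idf = 1 + ln(maxDocs / (docFreq + 1)), and a coord factor scaling by the fraction of matched query clauses. A minimal sketch that reproduces the 0.03 score of the first hit from the constants in the tree above (queryNorm and fieldNorm taken as given):

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                # tf(freq) = sqrt(termFreq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * i * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.047034867   # from the explain tree
FIELD_NORM = 0.0390625     # 1/sqrt(#field terms), quantized by Lucene

networks = term_score(2.0, 1060, 44218, QUERY_NORM, FIELD_NORM)
term_22  = term_score(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM)
score = (networks + term_22) * (1.0 / 3.0)  # coord(1/3) from the tree
print(round(score, 6))  # ~0.029998
```

The small residual differences against the printed tree come from Lucene's single-precision arithmetic.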
  2. Osinska, V.; Kowalska, M.; Osinski, Z.: The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.03
    0.029997721 = product of:
      0.089993164 = sum of:
        0.089993164 = sum of:
          0.05813029 = weight(_text_:networks in 4641) [ClassicSimilarity], result of:
            0.05813029 = score(doc=4641,freq=2.0), product of:
              0.22247115 = queryWeight, product of:
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.047034867 = queryNorm
              0.26129362 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.72992 = idf(docFreq=1060, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
          0.031862877 = weight(_text_:22 in 4641) [ClassicSimilarity], result of:
            0.031862877 = score(doc=4641,freq=2.0), product of:
              0.1647081 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.047034867 = queryNorm
              0.19345059 = fieldWeight in 4641, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4641)
      0.33333334 = coord(1/3)
    
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of their research problems. They can also identify cooperation networks and, consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality, as well as attempts to define different user groups. A survey about the interface was sent to selected researchers employed at Nicolaus Copernicus University. The results were used to answer the question of whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  3. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.02
    0.023252118 = product of:
      0.06975635 = sum of:
        0.06975635 = product of:
          0.1395127 = sum of:
            0.1395127 = weight(_text_:networks in 3230) [ClassicSimilarity], result of:
              0.1395127 = score(doc=3230,freq=8.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.6271047 = fieldWeight in 3230, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3230)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with the VOSViewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user does not need to produce a network based on all data (e.g., from one year), but can collect the Mendeley data for a single institution (or journals, or topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computing power and digital data availability. Therefore, it is very useful to have base maps and create the network with the overlay technique.
  4. Petrovich, E.: Science mapping and science maps (2021) 0.01
    0.013424616 = product of:
      0.04027385 = sum of:
        0.04027385 = product of:
          0.0805477 = sum of:
            0.0805477 = weight(_text_:networks in 595) [ClassicSimilarity], result of:
              0.0805477 = score(doc=595,freq=6.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.3620591 = fieldWeight in 595, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.03125 = fieldNorm(doc=595)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
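The term-based maps described in this entry rest on a co-occurrence network: two terms are linked with a weight equal to the number of publications in which they appear together. A minimal sketch over a hypothetical toy corpus (the term sets are invented for illustration):

```python
from itertools import combinations
from collections import Counter

# Hypothetical toy data: the set of indexing terms of each publication.
publications = [
    {"citation analysis", "co-citation", "mapping"},
    {"co-citation", "mapping", "visualization"},
    {"mapping", "visualization"},
]

# Edge weight = number of publications in which the two terms co-occur.
edges = Counter()
for terms in publications:
    for a, b in combinations(sorted(terms), 2):  # sorted: stable edge keys
        edges[(a, b)] += 1

print(edges[("co-citation", "mapping")])  # -> 2 (co-occur in two publications)
```

The resulting weighted edge list is exactly the kind of network that tools such as VOSviewer or Pajek then lay out and cluster.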
  5. Fowler, R.H.; Wilson, B.A.; Fowler, W.A.L.: Information navigator : an information system using associative networks for display and retrieval (1992) 0.01
    0.011626059 = product of:
      0.034878176 = sum of:
        0.034878176 = product of:
          0.06975635 = sum of:
            0.06975635 = weight(_text_:networks in 919) [ClassicSimilarity], result of:
              0.06975635 = score(doc=919,freq=2.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.31355235 = fieldWeight in 919, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=919)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  6. Leydesdorff, L.; Persson, O.: Mapping the geography of science : distribution patterns and networks of relations among cities and institutes (2010) 0.01
    0.011626059 = product of:
      0.034878176 = sum of:
        0.034878176 = product of:
          0.06975635 = sum of:
            0.06975635 = weight(_text_:networks in 3704) [ClassicSimilarity], result of:
              0.06975635 = score(doc=3704,freq=2.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.31355235 = fieldWeight in 3704, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3704)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  7. Zhu, Y.; Yan, E.; Song, I.-Y.: The use of a graph-based system to improve bibliographic information retrieval : system design, implementation, and evaluation (2017) 0.01
    0.011626059 = product of:
      0.034878176 = sum of:
        0.034878176 = product of:
          0.06975635 = sum of:
            0.06975635 = weight(_text_:networks in 3356) [ClassicSimilarity], result of:
              0.06975635 = score(doc=3356,freq=2.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.31355235 = fieldWeight in 3356, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3356)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In this article, we propose GIBIR, a graph-based interactive bibliographic information retrieval system. GIBIR provides an effective way to retrieve bibliographic information. The system represents bibliographic information as networks and provides a form-based query interface. Users can develop their queries interactively by referencing the system-generated graph queries. Complex queries such as "papers on information retrieval, which were cited by John's papers that had been presented in SIGIR" can be effectively answered by the system. We evaluate the proposed system by developing another, relational-database-based bibliographic information retrieval system with the same interface and functions. Experiment results show that the proposed system executes the same queries much faster than the relational database-based system; on average, our system reduced the execution time by 72% (for the 3-node query), 89% (for the 4-node query), and 99% (for the 5-node query).
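The example query in the abstract is a pattern match over a bibliographic graph. A hedged sketch of how such a pattern might be evaluated on a hypothetical toy graph (the node attributes, edge set, and matching logic here are invented for illustration; the abstract does not describe GIBIR's storage layer):

```python
# Hypothetical toy bibliographic graph: paper nodes with attributes,
# plus a set of typed "cites" edges (citing, cited).
papers = {
    "p1": {"topic": "information retrieval"},
    "p2": {"topic": "databases"},
    "p3": {"author": "John", "venue": "SIGIR"},
    "p4": {"author": "John", "venue": "CIKM"},
}
cites = {("p3", "p1"), ("p4", "p2")}

# Pattern: paper X on information retrieval, cited by a paper Y
# authored by John and presented at SIGIR.
hits = [
    cited
    for citing, cited in cites
    if papers[citing].get("author") == "John"
    and papers[citing].get("venue") == "SIGIR"
    and papers[cited].get("topic") == "information retrieval"
]
print(hits)  # -> ['p1']
```

A graph store evaluates such a pattern by traversing edges from the constrained nodes, which is why it outperforms the equivalent multi-way relational join as the number of pattern nodes grows.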
  8. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.01
    0.0109611545 = product of:
      0.03288346 = sum of:
        0.03288346 = product of:
          0.06576692 = sum of:
            0.06576692 = weight(_text_:networks in 4746) [ClassicSimilarity], result of:
              0.06576692 = score(doc=4746,freq=4.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.29562 = fieldWeight in 4746, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. 
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  9. Quirin, A.; Cordón, O.; Santamaría, J.; Vargas-Quesada, B.; Moya-Anegón, F.: A new variant of the Pathfinder algorithm to generate large visual science maps in cubic time (2008) 0.01
    0.0109611545 = product of:
      0.03288346 = sum of:
        0.03288346 = product of:
          0.06576692 = sum of:
            0.06576692 = weight(_text_:networks in 2112) [ClassicSimilarity], result of:
              0.06576692 = score(doc=2112,freq=4.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.29562 = fieldWeight in 2112, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2112)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years, there has been increasing interest in generating visual representations of very large scientific domains. A methodology based on the combined use of ISI-JCR category cocitation and social network analysis through the Pathfinder algorithm has demonstrated its ability to achieve high-quality, schematic visualizations for these kinds of domains. Now, the next step would be to generate these scientograms in an on-line fashion. To do so, there is a need to significantly decrease the run time of the latter pruning technique when working with category cocitation matrices of a large dimension like the ones handled in these large domains (Pathfinder has a time complexity order of O(n4), with n being the number of categories in the cocitation matrix, i.e., the number of nodes in the network). Although a previous improvement called Binary Pathfinder has already been proposed to speed up the original algorithm, its time complexity reduction is not enough for that aim. In this paper, we make use of a shortest-path computation different from classical approaches in computer science graph theory to propose a new variant of the Pathfinder algorithm which allows us to reduce its time complexity by one order of magnitude, O(n3), and thus to significantly decrease the run time of the implementation when applied to large scientific domains considering the parameter q = n - 1. Moreover, the new algorithm has a much simpler structure than the Binary Pathfinder and saves a significant amount of memory with respect to the original Pathfinder by reducing the space complexity to storing just two matrices. An experimental comparison using large networks from real-world domains shows the good performance of the new proposal.
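For q = n - 1 (and the usual r = ∞), Pathfinder pruning can be phrased as an all-pairs minimax-distance computation in O(n³), relaxed in Floyd-Warshall style: an edge survives only if no alternative path has a smaller maximum edge weight. A hedged sketch of that idea under distance semantics (lower weight = closer); this illustrates the cubic-time principle, not the authors' exact implementation:

```python
import math

def pathfinder_q_n_minus_1(w):
    """Prune a symmetric distance matrix, keeping edge (i, j) only if no
    alternative path has a smaller maximum edge weight (q = n-1, r = inf)."""
    n = len(w)
    d = [row[:] for row in w]  # minimax distances, initialised to edge weights
    for k in range(n):         # Floyd-Warshall-style relaxation, O(n^3)
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if w[i][j] < math.inf and w[i][j] <= d[i][j]}

# Toy symmetric distance matrix: the 0-2 edge (weight 5) is dominated by
# the path 0-1-2, whose maximum edge weight is 2, so it is pruned.
w = [[0, 1, 5],
     [1, 0, 2],
     [5, 2, 0]]
print(pathfinder_q_n_minus_1(w))  # edge (0, 2) removed
```

Only the two matrices w and d are stored, matching the reduced space complexity mentioned in the abstract.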
  10. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.010620959 = product of:
      0.031862877 = sum of:
        0.031862877 = product of:
          0.063725755 = sum of:
            0.063725755 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.063725755 = score(doc=3406,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 5.2010 16:22:35
  11. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.010620959 = product of:
      0.031862877 = sum of:
        0.031862877 = product of:
          0.063725755 = sum of:
            0.063725755 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.063725755 = score(doc=2755,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    1. 2.2016 18:25:22
  12. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.01
    0.009688382 = product of:
      0.029065145 = sum of:
        0.029065145 = product of:
          0.05813029 = sum of:
            0.05813029 = weight(_text_:networks in 82) [ClassicSimilarity], result of:
              0.05813029 = score(doc=82,freq=2.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.26129362 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Science Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environments; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
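The vector-space normalization mentioned in the abstract typically amounts to comparing journals by the cosine of the angle between their citation vectors rather than by raw counts. A minimal sketch on a hypothetical toy matrix (journal names and counts are invented; the actual JCR processing is more involved):

```python
import math

def cosine(u, v):
    # Cosine similarity between two citation vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical citing patterns of three journals over four cited journals.
matrix = {
    "J1": [10, 0, 5, 0],
    "J2": [8, 1, 4, 0],
    "J3": [0, 7, 0, 9],
}
sim = cosine(matrix["J1"], matrix["J2"])
print(round(sim, 3))  # J1 and J2 cite very similarly despite different volumes
```

Normalizing this way makes large and small journals comparable before the network is handed to a layout program such as Pajek.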
  13. Zou, J.; Thoma, G.; Antani, S.: Unified deep neural network for segmentation and labeling of multipanel biomedical figures (2020) 0.01
    0.009688382 = product of:
      0.029065145 = sum of:
        0.029065145 = product of:
          0.05813029 = sum of:
            0.05813029 = weight(_text_:networks in 10) [ClassicSimilarity], result of:
              0.05813029 = score(doc=10,freq=2.0), product of:
                0.22247115 = queryWeight, product of:
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.047034867 = queryNorm
                0.26129362 = fieldWeight in 10, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.72992 = idf(docFreq=1060, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=10)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Recent efforts in biomedical visual question answering (VQA) research rely on combined information gathered from the image content and surrounding text supporting the figure. Biomedical journals are a rich source of information for such multimodal content indexing. For multipanel figures in these journals, it is critical to develop automatic figure panel splitting and label recognition algorithms to associate individual panels with text metadata in the figure caption and the body of the article. Challenges in this task include large variations in figure panel layout, label location, size, contrast to background, and so on. In this work, we propose a deep convolutional neural network, which splits the panels and recognizes the panel labels in a single step. Visual features are extracted from several layers at various depths of the backbone neural network and organized to form a feature pyramid. These features are fed into classification and regression networks to generate candidates of panels and their labels. These candidates are merged to create the final panel segmentation result through a beam search algorithm. We evaluated the proposed algorithm on the ImageCLEF data set and achieved better performance than the results reported in the literature. In order to thoroughly investigate the proposed algorithm, we also collected and annotated our own data set of 10,642 figures. The experiments, trained on 9,642 figures and evaluated on the remaining 1,000 figures, show that combining panel splitting and panel label recognition mutually benefit each other.
  14. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.009012183 = product of:
      0.02703655 = sum of:
        0.02703655 = product of:
          0.0540731 = sum of:
            0.0540731 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.0540731 = score(doc=3355,freq=4.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  15. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    0.0074346713 = product of:
      0.022304013 = sum of:
        0.022304013 = product of:
          0.044608027 = sum of:
            0.044608027 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
              0.044608027 = score(doc=2500,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.2708308 = fieldWeight in 2500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2500)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30. 1.2007 18:22:41
  16. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.0063725756 = product of:
      0.019117726 = sum of:
        0.019117726 = product of:
          0.038235452 = sum of:
            0.038235452 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.038235452 = score(doc=1289,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Lecture given on the occasion of the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  17. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.0063725756 = product of:
      0.019117726 = sum of:
        0.019117726 = product of:
          0.038235452 = sum of:
            0.038235452 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.038235452 = score(doc=1781,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 3.2008 14:35:21
  18. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    0.0063725756 = product of:
      0.019117726 = sum of:
        0.019117726 = product of:
          0.038235452 = sum of:
            0.038235452 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.038235452 = score(doc=3693,freq=2.0), product of:
                0.1647081 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.047034867 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2010 19:36:46
  19. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.01
    
    Date
    4. 2.2015 9:22:39
  20. Burnett, R.: How images think (2004) 0.01
    
    Footnote
    The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that using biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the ones we held with machines when we were children) that seem to suggest a wholly different kind of relationship.
    Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode for images today, is the subject of chapter seven. The issue here is whether P2P fosters community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum, from 17th-century letter writing to e-mail. In the new media, and Burnett uses the example of popular music, which can be sampled and re-edited to create new compositions, the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (on which Burnett has already expended considerable space in the early chapters) involves much more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications.
    Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000, and Richard Doyle, 2003), the interactivity of the new media games suggests a greater degree of engagement with image-worlds.
    Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. He concludes that images no longer simply represent the world; they shape our very interaction with it and become the foundation for our understanding of the spaces, places, and historical moments that we inhabit. Burnett closes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles's argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).

Languages

  • e 23
  • d 4
  • a 1
