Search (148 results, page 3 of 8)

  • theme_ss:"Visualisierung"
  1. Visual thesaurus (2005) 0.00
    0.003701243 = coord(1/2) × coord(1/3) × 0.022207458 weight(_text_:of in 1292) [ClassicSimilarity] (tf=6.6332498 at freq=44.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.03125)
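The relevance figures in these listings come straight from Lucene's "explain" output. A minimal sketch of how the ClassicSimilarity factors above multiply out (constants copied from this entry's explain tree; the code only reproduces its arithmetic):

```python
import math

# Reproduces Lucene's ClassicSimilarity arithmetic from the first
# result's explain tree (term "of" in doc 1292, freq=44).
doc_freq, max_docs = 25162, 44218
freq = 44.0
query_norm = 0.043811057   # copied from the explain output
field_norm = 0.03125       # encodes 1/sqrt(field length), quantized

tf = math.sqrt(freq)                             # 6.6332498
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 1.5637573
query_weight = idf * query_norm                  # 0.06850986
field_weight = tf * idf * field_norm             # 0.3241498
weight = query_weight * field_weight             # 0.022207458
score = weight * 0.5 * (1.0 / 3.0)               # coord(1/2) * coord(1/3)
print(f"{score:.9f}")                            # ≈ 0.003701243
```

The same five factors (tf, idf, queryNorm, fieldNorm, coord) account for every score tree on this page; only freq, fieldNorm, and the document id vary.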
    
    Abstract
     A visual thesaurus system and method for displaying a selected term in association with its one or more meanings, other words to which it is related, and further relationship information. The results of a search are presented in a directed graph that provides more information than an ordered list. When a user selects one of the results, the display reorganizes around that selection, allowing further searches without the interruption of going to additional pages.
    Content
     Traditional print reference guides often have two methods of finding information: an order (alphabetical for dictionaries and encyclopedias, by subject hierarchy in the case of thesauri) and indices (ordered lists, with a more complete listing of words and concepts, which refer back to original content in the main body of the book). A user of such traditional print reference guides who is looking for information will either browse through the ordered information in the main body of the reference book, or scan through the indices to find what is necessary. The advent of the computer allows for much more rapid electronic searches of the same information, and for multiple layers of indices. Users can either search through information by entering a keyword, or browse through the information via an outline index, which represents the information contained in the main body of the data. There are two traditional user interfaces for such applications. First, the user may type text into a search field and, in response, a list of results is returned to the user. The user then selects a returned entry and may page through the resulting information. Alternatively, the user may choose from a list of words from an index. For example, software thesaurus applications, in which a user attempts to find synonyms, antonyms, homonyms, etc. for a selected word, are usually implemented using the conventional search and presentation techniques discussed above. The presentation of results only allows for a one-dimensional ordering of data at any one time. In addition, only a limited number of results can be shown at once, and selecting a result inevitably leads to another page; if the result is not satisfactory, the user must search again. Finally, it is difficult to present information about the manner in which the search results are related, or to present quantitative information about the results without causing confusion.
Therefore, there exists a need for a multidimensional graphical display of information, in particular with respect to information relating to the meaning of words and their relationships to other words. There further exists a need to present large amounts of information in a way that can be manipulated by the user, without the user losing his place. And there exists a need for more fluid, intuitive and powerful thesaurus functionality that invites the exploration of language.
  2. Stodola, J.T.: ¬The concept of information and questions of users with visual disabilities : an epistemological approach (2014) 0.00
    0.003701243 = coord(1/2) × coord(1/3) × 0.022207458 weight(_text_:of in 1783) [ClassicSimilarity] (tf=6.6332498 at freq=44.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.03125)
    
    Abstract
     Purpose - The purpose of this paper is to evaluate the suitability of particular epistemological schools with regard to the issues of users with visual impairment, to offer a theoretical answer to the question of why these issues are not at the center of interest of information science, and to find an epistemological approach capable of providing a theoretical basis for analyzing the relationship between information and visually impaired users. Design/methodology/approach - The methodological basis of the paper is determined by the choice of epistemological approach. A conceptual analysis is applied in order to reflect on the concept of information and relate it to issues associated with users with visual impairment. Findings - Most information science theories are based on empiricism and rationalism, which is the reason for their low interest in the questions of visually impaired users. Users with visual disabilities fall outside the interest of rationalistic epistemology because it underestimates sensory perception; paradoxically, empiricism is not interested in them because it overestimates sensory perception. Realism, which reflects such issues fairly, is an approach that allows the provision of information to persons with visual disabilities to be dealt with properly. Research limitations/implications - The paper is speculative in character; its findings should be supported by empirical research in the future. Practical implications - The theoretical questions addressed in the paper arise from the practice of providing information to visually impaired users. Because practice influences theory and vice versa, the author hopes that the findings can serve to improve practice in the field.
Social implications - The paper provides a theoretical anchoring of the issues related to the inclusion of people with disabilities in society, and its findings have the potential to support such efforts. Originality/value - This is the first study linking the questions of users with visual disabilities to the highly abstract issues connected with the concept of information.
    Source
    Journal of documentation. 70(2014) no.5, S.782-800
  3. Hajdu Barat, A.: Human perception and knowledge organization : visual imagery (2007) 0.00
    0.0036907129 = coord(1/2) × coord(1/3) × 0.022144277 weight(_text_:of in 2595) [ClassicSimilarity] (tf=5.2915025 at freq=28.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0390625)
    
    Abstract
     Purpose - This paper aims to explore the theory and practice of knowledge organization and its necessary connection to human perception, and shows one of the potential solutions. Design/methodology/approach - The author attempts to survey the problem of concept-building and extension, as well as the determination of semantics, in different aspects. The purpose is to find criteria for the choice of the solution that best incorporates users into the design cycles of knowledge organization systems. Findings - It is widely agreed that cognition provides the basis for concept-building; however, at the next stage of processing there is a debate. Fundamentally, what is the connection between perception and the higher cognitive processes? The perceptual method does not separate these two but rather considers them united, with perception permeating cognition. By contrast, the linguistic method considers perception as an information-receiving system. Separate from, and following, perception, the cognitive subsystems then perform information and data processing, leading to both knowledge organization and representation. Under that model, we assume that top-level concepts emerge from knowledge organization and representation. This paper points to the obvious connection between visual imagery and the Internet, and to perceptual access in knowledge organization and information retrieval. There are some practical and characteristic solutions for the visualization of information, without any claim to completeness. Research limitations/implications - Librarians need to identify those semantic characteristics which stimulate a similar conceptual image both in the mind of the librarian and in the mind of the user. Originality/value - For a fresh perspective, an understanding of perception is required as well.
  4. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.00
    0.0036907129 = coord(1/2) × coord(1/3) × 0.022144277 weight(_text_:of in 3756) [ClassicSimilarity] (tf=5.2915025 at freq=28.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0390625)
    
    Abstract
     Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates the advantages of the graph-based approach.
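The ranking step this abstract describes - a random-walk measure over a relational graph joining rating history with domain knowledge - can be sketched as a personalized PageRank (random walk with restart). The toy graph below is hypothetical, not the authors' Hecht-museum data:

```python
# Personalized PageRank (random walk with restart) on a relational graph:
# the walk restarts at the visitor node, so nodes well connected to the
# visitor's rating history accumulate more stationary probability.
def random_walk_with_restart(graph, seed, alpha=0.15, iters=50):
    nodes = list(graph)
    rank = {n: 1.0 if n == seed else 0.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: (alpha if n == seed else 0.0) for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = (1.0 - alpha) * rank[n] / len(out)
            for m in out:
                nxt[m] += share
        rank = nxt
    return rank

# Hypothetical relational graph: a visitor, two exhibits they rated,
# and multimedia presentations linked to exhibits by theme.
graph = {
    "visitor": ["exhibit_a", "exhibit_b"],
    "exhibit_a": ["visitor", "pres_1", "pres_2"],
    "exhibit_b": ["visitor", "pres_2"],
    "pres_1": ["exhibit_a"],
    "pres_2": ["exhibit_a", "exhibit_b"],
}
rank = random_walk_with_restart(graph, seed="visitor")
# pres_2 is reachable from both rated exhibits, so it outranks pres_1.
print(rank["pres_2"] > rank["pres_1"])
```

The scores of the presentation nodes then serve directly as the recommendation ranking.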
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1911-1924
  5. Burnett, R.: How images think (2004) 0.00
    0.0036801526 = coord(1/2) × coord(1/3) × 0.022080915 weight(_text_:of in 3884) [ClassicSimilarity] (tf=13.190906 at freq=174.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.015625)
    
    Footnote
     Rez. in: JASIST 56(2005) no.10, S.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication - computers, for instance - suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs - they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live", and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world - they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events - for all perceptive, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed have altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images.
As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization: viewers generate what they see in the images. This involves the historical moment of viewing - such as viewing images of the WTC bombings - and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
     Moving on to virtual images, Burnett posits the existence of "microcultures": places where people take control of the means of creation and production in order to make sense of their social and cultural experiences. Driven by the need for community, such microcultures generate specific images as part of a cultural movement (Burnett in fact argues that microcultures make it possible for a "small cinema of twenty-five seats to become part of a cultural movement" [p. 63]), where the process of visualization - which involves an awareness of the historical moment - is central to the info-world and imagescapes presented. The computer becomes an archive, a history. The challenge is not only of preserving information, but also of extracting information. Visualization increasingly involves this process of picking a "vantage point" in order to selectively assimilate the information. In virtual reality systems, and in the digital age in general, the distance between what is being pictured and what is experienced is overcome. Images used to be treated as opaque or transparent films between experience, perception, and thought. But now, images are taken to another level, where the viewer is immersed in the image-experience. Burnett argues - though this is hardly a fresh insight - that "interactivity is only possible when images are the raw material used by participants to change if not transform the purpose of their viewing experience" (p. 90). He suggests that a work of art "does not start its life as an image ... it gains the status of image when it is placed into a context of viewing and visualization" (p. 90). With simulations and cyberspace the viewing experience has been changed utterly. Burnett defines simulation as "mapping different realities into images that have an environmental, cultural, and social form" (p. 95).
However, the emphasis in Burnett is significant - he suggests that interactivity is not achieved through effects, but as a result of experiences attached to stories. Narrative is not merely the effect of technology - it is as much about awareness as it is about fantasy. Heightened awareness, which is popular culture's aim at all times, and now available through head-mounted displays (HMD), also involves human emotions and the subtleties of human intuition.
     The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that to use biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the ones we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode of images today, is the subject of chapter seven. The issue here is whether P2P affects community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum - from 17th-century letter writing to e-mail. In the new media - and Burnett uses the example of popular music, which can be sampled and re-edited to create new compositions - the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (about which Burnett has already expended considerable space in the early chapters) involves a lot more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000, and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with image-worlds.
Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. Burnett concludes that images no longer simply represent the world - they shape our very interaction with it; they become the foundation for our understanding of the spaces, places, and historical moments that we inhabit. Burnett concludes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
     Burnett's work is a useful basic primer on the new media. One of the chief attractions here is his clear language, devoid of the jargon of either computer science or advanced critical theory. This makes How Images Think an accessible introduction to digital cultures. Burnett explores the impact of the new technologies on not just image-making but on image-effects, and the ways in which images constitute our ecologies of identity, communication, and subject-hood. While some of the sections seem a little too basic (especially where he speaks about the ways in which we constitute an object as an object of art, see above), especially in the wake of reception theory, it still remains a starting point for those interested in cultural studies of the new media. The case Burnett makes for the transformation of the ways in which we look at images has been strengthened by his attention to the history of this transformation - from photography through television and cinema and now to immersive virtual reality systems. Joseph Koerner (2004) has pointed out that the iconoclasm of early modern Europe actually demonstrates how idolatry was integral to the image-breakers' core belief. As Koerner puts it, "images never go away ... they persist and function by being perpetually destroyed" (p. 12). Burnett, likewise, argues that images in new media are re-formed to suit new contexts of meaning-production - even when they appear to be destroyed. Images are recast, and the degree of their realism (or fantasy) heightened or diminished - but they do not "go away." Images do think, but - if I can parse Burnett's entire work - they think with, through, and in human intelligence, emotions, and intuitions. Images are uncanny - they are both us and not-us, ours and not-ours. There is, surprisingly, one factual error. Burnett claims that Myron Krueger pioneered the term "virtual reality." To the best of my knowledge, it was Jaron Lanier who did so (see Featherstone & Burrows, 1998 [1995], p. 5)."
  6. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.00
    0.0036536194 = coord(1/2) × coord(1/3) × 0.021921717 weight(_text_:of in 1528) [ClassicSimilarity] (tf=3.7416575 at freq=14.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0546875)
    
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as a basis for indexing in regard to its ability to adequately cover the contents to be indexed and as a basis for the specific indexing method used is of crucial importance in automatic indexing. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Source
    International Journal of Metadata, Semantics and Ontologies. 3(2008) no.1, S.53-67
  7. Buchel, O.; Sedig, K.: Extending map-based visualizations to support visual tasks : the role of ontological properties (2011) 0.00
    0.0036536194 = coord(1/2) × coord(1/3) × 0.021921717 weight(_text_:of in 2307) [ClassicSimilarity] (tf=3.7416575 at freq=14.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0546875)
    
    Abstract
    Map-based visualizations of document collections have become popular in recent times. However, most of these visualizations emphasize only geospatial properties of objects, leaving out other ontological properties. In this paper we propose to extend these visualizations to include nongeospatial properties of documents to support users with elementary and synoptic visual tasks. More specifically, additional suitable representations that can enhance the utility of map-based visualizations are discussed. To demonstrate the utility of the proposed solution, we have developed a prototype map-based visualization system using Google Maps (GM), which demonstrates how additional representations can be beneficial.
  8. Lin, X.; Bui, Y.: Information visualization (2009) 0.00
    0.0036536194 = coord(1/2) × coord(1/3) × 0.021921717 weight(_text_:of in 3818) [ClassicSimilarity] (tf=3.7416575 at freq=14.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0546875)
    
    Abstract
     The goal of information visualization (IV) is to amplify human cognition through computer-generated, interactive, and visual data representation. By combining computational power with human perceptual and associative capabilities, IV will make it easier for users to navigate through large amounts of information, discover patterns or hidden structures of the information, and understand the semantics of the information space. This entry reviews the history and background of IV and discusses its basic principles with pointers to relevant resources. The entry also summarizes major IV techniques and toolkits and shows various examples of IV applications.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  9. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    0.0036536194 = coord(1/2) × coord(1/3) × 0.021921717 weight(_text_:of in 3883) [ClassicSimilarity] (tf=3.7416575 at freq=14.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.0546875)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
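The objective sketched in this abstract can be made concrete in a few lines of NumPy: Gaussian affinities P in the data space, Student-t affinities Q over the output of a parametric map, and the KL divergence that training would minimize. This is a hedged illustration with a fixed kernel bandwidth and a random linear map standing in for the paper's neural network (t-SNE proper tunes per-point bandwidths via a perplexity target):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))        # high-dimensional data points
W = rng.normal(size=(10, 2)) * 0.1   # parameters of the map f_W(x) = xW
Y = X @ W                            # low-dimensional embedding

def pairwise_sq_dists(Z):
    s = (Z ** 2).sum(axis=1)
    return s[:, None] + s[None, :] - 2.0 * Z @ Z.T

# Gaussian affinities in data space (fixed bandwidth for brevity).
D = pairwise_sq_dists(X)
P = np.exp(-D / 2.0)
np.fill_diagonal(P, 0.0)
P /= P.sum()

# Student-t (1 d.o.f.) affinities in the latent space.
E = pairwise_sq_dists(Y)
Q = 1.0 / (1.0 + E)
np.fill_diagonal(Q, 0.0)
Q /= Q.sum()

# KL(P || Q): the loss that gradient descent on W would minimize.
eps = 1e-12
kl = float(np.sum(P * np.log((P + eps) / (Q + eps))))
print(kl >= 0.0)
```

Making the map parametric means unseen points can be embedded by a single forward pass through f_W, which is the property the paper exploits.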
    Source
    Proceedings of the Twelfth International Conference on Artificial Intelligence & Statistics (AI-STATS), JMLR W&CP 5, 2009. S.384-391
  10. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    0.0036161449 = coord(1/2) × coord(1/3) × 0.02169687 weight(_text_:of in 4746) [ClassicSimilarity] (tf=6.4807405 at freq=42.0, idf=1.5637573 with docFreq=25162/maxDocs=44218, queryNorm=0.043811057, fieldNorm=0.03125)
    
    Abstract
     Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone and 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame-rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries.
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  11. Golub, K.; Ziolkowski, P.M.; Zlodi, G.: Organizing subject access to cultural heritage in Swedish online museums (2022)
    Abstract
    Purpose The study aims to paint a representative picture of the current state of search interfaces of Swedish online museum collections, focussing on search functionalities with particular reference to subject searching, as well as the use of controlled vocabularies, with the purpose of identifying which improvements of the search interfaces are needed to ensure high-quality information retrieval for the end user. Design/methodology/approach In the first step, a set of 21 search interface criteria was identified, based on related research and current standards in the domain of cultural heritage knowledge organization. Secondly, a complete set of Swedish museums that provide online access to their collections was identified, comprising nine cross-search services and 91 individual museums' websites. These 100 websites were each evaluated against the 21 criteria between 1 July and 31 August 2020. Findings Although many standards and guidelines are in place to ensure quality-controlled subject indexing, which in turn supports information retrieval of relevant resources (as individual or full search results), the study shows that they are not broadly implemented, resulting in information retrieval failures for the end user. The study also demonstrates a strong need for the implementation of controlled vocabularies in these museums. Originality/value This study is a rare piece of research that examines subject searching in online museums; the 21 search criteria and their use in the analysis of the complete set of online collections of a country represent a considerable and unique contribution to the fields of knowledge organization and information retrieval of cultural heritage. Its particular value lies in showing how the needs of end users, many of which are documented and reflected in international standards and guidelines, should be taken into account in designing search tools for these museums, especially in subject searching, which is the most complex and yet the most common type of search. Much effort has been invested in digitizing cultural heritage collections, but access to them is hindered by poor search functionality. This study identifies the most important aspects to improve.
    Source
    Journal of documentation. 78(2022) no.7, S.211-247
  12. Chen, R.H.-G.; Chen, C.-M.: Visualizing the world's scientific publications (2016)
    Abstract
    Automated methods for the analysis, modeling, and visualization of large-scale scientometric data provide measures that enable the depiction of the state of world scientific development. We aimed to integrate minimum span clustering (MSC) and minimum spanning tree methods to cluster and visualize the global pattern of scientific publications (PSP) by analyzing aggregated Science Citation Index (SCI) data from 1994 to 2011. We hypothesized that PSP clustering is mainly affected by countries' geographic location, ethnicity, and level of economic development, as indicated in previous studies. Our results showed that the 100 countries with the highest rates of publications were decomposed into 12 PSP groups and that countries within a group tended to be geographically proximal, ethnically similar, or comparable in terms of economic status. Hubs and bridging nodes in each knowledge production group were identified. The performance of each group was evaluated across 16 knowledge domains based on their specialization, volume of publications, and relative impact. Awareness of the strengths and weaknesses of each group in various knowledge domains may have useful applications for examining scientific policies, adjusting the allocation of resources, and promoting international collaboration for future developments.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.10, S.2477-2488
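The abstract above pairs minimum span clustering with minimum spanning tree methods to group countries by their publication patterns. As a rough, hypothetical sketch of the MST-and-cut idea only (not the authors' code; the distance matrix and the four "countries" below are invented for illustration), the grouping step might look like:

```python
# Sketch: build a minimum spanning tree over pairwise distances between
# countries' publication profiles, then cut the heaviest MST edges so the
# remaining forest falls apart into clusters. Toy data, illustrative only.

def prim_mst(dist):
    """Return MST edges (i, j, weight) for a full symmetric distance matrix."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the tree
        i, j, w = min(
            ((a, b, dist[a][b]) for a in in_tree for b in range(n) if b not in in_tree),
            key=lambda e: e[2],
        )
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

def cluster_by_cutting(n, mst_edges, k):
    """Drop the k-1 heaviest MST edges; return the k resulting components."""
    kept = sorted(mst_edges, key=lambda e: e[2])[: n - k]
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j, _ in kept:
        parent[find(i)] = find(j)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# Four hypothetical countries: 0 and 1 publish similarly, as do 2 and 3.
dist = [
    [0.0, 0.1, 0.9, 0.8],
    [0.1, 0.0, 0.85, 0.9],
    [0.9, 0.85, 0.0, 0.15],
    [0.8, 0.9, 0.15, 0.0],
]
mst = prim_mst(dist)
print(cluster_by_cutting(4, mst, 2))  # two groups
```

The cut-the-heaviest-edges rule is a generic stand-in; the paper's actual decomposition into 12 publication groups uses its own criteria.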
  13. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017)
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.724-738
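The triangulation described above, associating JJCR, SAK, and TWRC maps with each other, can be approximated by simple edge voting across the three networks. A minimal sketch under that assumption (the edge sets are invented, and the "edge must appear in at least two maps" rule is an illustrative choice, not the authors' procedure):

```python
# Sketch: treat each mapping approach as an undirected edge set and keep
# edges that enough of the maps agree on. Toy topic labels, illustrative only.

def triangulate(maps, min_votes=2):
    """Return edges supported by at least min_votes of the given edge sets."""
    votes = {}
    for edge_set in maps:
        for a, b in edge_set:
            key = tuple(sorted((a, b)))  # undirected: normalize endpoint order
            votes[key] = votes.get(key, 0) + 1
    return {e for e, v in votes.items() if v >= min_votes}

# Hypothetical mini-maps from the three approaches.
jjcr = {("hydrology", "ecology"), ("ecology", "policy")}
sak = {("hydrology", "ecology"), ("hydrology", "engineering")}
twrc = {("ecology", "hydrology"), ("ecology", "policy")}

print(sorted(triangulate([jjcr, sak, twrc])))
```

Raising `min_votes` to 3 keeps only edges confirmed by all three maps, mirroring the idea that agreement across methods lends an edge more credibility.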
  14. Yan, B.; Luo, J.: Filtering patent maps for visualization of diversification paths of inventors and organizations (2017)
    Abstract
    In the information science literature, recent studies have used patent databases and patent classification information to construct network maps of patent technology classes. In such a patent technology map, almost all pairs of technology classes are connected, whereas most of the connections between them are extremely weak. This observation suggests the possibility of filtering the patent network map by removing weak links. However, removing links may reduce the explanatory power of the network on inventor or organization diversification. The network links may explain the patent portfolio diversification paths of inventors and inventing organizations. We measure the diversification explanatory power of the patent network map, and present a method to objectively choose an optimal tradeoff between explanatory power and removing weak links. We show that this method can remove a degree of arbitrariness compared with previous filtering methods based on arbitrary thresholds, and also identify previous filtering methods that created filters outside the optimal tradeoff. The filtered map aims to aid in network visualization analyses of the technological diversification of inventors, organizations, and other innovation agents, and potential foresight analysis. Such applications to a prolific inventor (Leonard Forbes) and company (Google) are demonstrated.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1551-1563
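The tradeoff described above, removing weak links while preserving the map's power to explain observed diversification paths, can be illustrated with a toy threshold sweep. A hedged sketch only: the edge weights, the recorded moves, and the simple hit-rate measure below are simplified stand-ins for the paper's method, not a reproduction of it.

```python
# Sketch: filter a weighted class-to-class map at a threshold, then measure
# what fraction of observed diversification moves still run along kept links.

def filter_links(edges, threshold):
    """Keep only links at least as strong as the threshold."""
    return [e for e in edges if e[2] >= threshold]

def explanatory_power(edges, moves):
    """Fraction of observed moves that a kept link can explain."""
    kept = {tuple(sorted(e[:2])) for e in edges}
    return sum(tuple(sorted(m)) in kept for m in moves) / len(moves)

# Hypothetical technology-class links (class, class, weight) and observed
# diversification moves of some inventor.
edges = [("A", "B", 0.9), ("B", "C", 0.5), ("C", "D", 0.1), ("A", "D", 0.05)]
moves = [("A", "B"), ("B", "C")]

for t in (0.0, 0.2, 0.6):
    kept = filter_links(edges, t)
    print(t, len(kept), explanatory_power(kept, moves))
```

In this toy sweep, the 0.2 threshold halves the number of links without losing any explanatory power, while 0.6 starts to cost explanation: the kind of tradeoff point the paper's method is said to select objectively.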
  15. Ekström, B.: Trace data visualisation enquiry : a methodological coupling for studying information practices in relation to information systems (2022)
    Abstract
    Purpose The purpose of this paper is to examine whether and how a methodological coupling of visualisations of trace data and interview methods can be utilised for information practices studies. Design/methodology/approach Trace data visualisation enquiry is suggested as the coupling of visualising exported data from an information system and using these visualisations as a basis for interview guides and elicitation in information practices research. The methodology is illustrated and applied through a small-scale empirical study of a citizen science project. Findings The study found that trace data visualisation enquiry enabled fine-grained investigation of temporal aspects of information practices and made it possible to compare and explore temporal and geographical aspects of practices. Moreover, the methodology enabled inquiries into information practices through trace data, which were then discussed with participants through elicitation. The study also found that the approach can aid a researcher in gaining a simultaneously overarching and close-up picture of information practices, which can lead to theoretical and methodological implications for information practices research. Originality/value Trace data visualisation enquiry extends current methods for investigating information practices, as it enables focus to be placed on the traces of practices as recorded through interactions with information systems and on study participants' accounts of activities.
    Source
    Journal of documentation. 78(2022) no.7, S.141-159
  16. Slavic, A.: Interface to classification : some objectives and options (2006)
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
  17. Yi, K.; Chan, L.M.: A visualization software tool for Library of Congress Subject Headings (2008)
    Content
    The aim of this study is to develop a software tool, VisuaLCSH, for effective searching, browsing, and maintenance of LCSH. This tool enables visualizing subject headings and hierarchical structures implied and embedded in LCSH. A conceptual framework for converting the hierarchical structure of headings in LCSH to an explicit tree structure is proposed, described, and implemented. The highlights of VisuaLCSH are summarized below: 1) revealing multiple aspects of a heading; 2) normalizing the hierarchical relationships in LCSH; 3) showing multi-level hierarchies in LCSH sub-trees; 4) improving the navigational function of LCSH in retrieval; and 5) enabling the implementation of generic search, i.e., the 'exploding' feature, in searching LCSH.
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
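Converting the hierarchy implied by broader-term references into an explicit tree, as VisuaLCSH is described as doing, can be sketched roughly as follows. The BT pairs below are hypothetical examples, not actual LCSH records, and the sketch ignores real LCSH complications such as polyhierarchy (a heading with several broader terms):

```python
# Sketch: invert narrower -> broader pairs into a child map, find the roots
# (broader terms that are never themselves narrower), and render the tree.

def build_tree(bt_pairs):
    """Turn (narrower, broader) pairs into (roots, children) for a tree."""
    children = {}
    for narrow, broad in bt_pairs:
        children.setdefault(broad, []).append(narrow)
    roots = sorted({b for _, b in bt_pairs} - {n for n, _ in bt_pairs})
    return roots, children

def render(headings, children, depth=0, out=None):
    """Indented multi-level listing of the hierarchy."""
    out = [] if out is None else out
    for h in headings:
        out.append("  " * depth + h)
        render(sorted(children.get(h, [])), children, depth + 1, out)
    return out

bt_pairs = [  # hypothetical BT references
    ("Oak", "Trees"),
    ("Maple", "Trees"),
    ("Trees", "Plants"),
    ("Shrubs", "Plants"),
]
print("\n".join(render(*build_tree(bt_pairs))))
```

The explicit child map is also what makes the 'exploding' generic search mentioned above cheap: retrieving a heading's subtree is a walk over `children` rather than a scan of the whole vocabulary.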
  18. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008)
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Source
    Research and advanced technology for digital libraries : proceedings of the 12th European conference, ECDL '08, Aarhus, Denmark
  19. Börner, K.; Chen, C.; Boyack, K.W.: Visualizing knowledge domains (2002)
    Abstract
    This chapter reviews visualization techniques that can be used to map the ever-growing domain structure of scientific disciplines and to support information retrieval and classification. In contrast to the comprehensive surveys conducted in traditional fashion by Howard White and Katherine McCain (1997, 1998), this survey not only reviews emerging techniques in interactive data analysis and information visualization, but also depicts the bibliographical structure of the field itself. The chapter starts by reviewing the history of knowledge domain visualization. We then present a general process flow for the visualization of knowledge domains and explain commonly used techniques. In order to visualize the domain reviewed by this chapter, we introduce a bibliographic data set of considerable size, which includes articles from the citation analysis, bibliometrics, semantics, and visualization literatures. Using tutorial style, we then apply various algorithms to demonstrate the visualization effects produced by different approaches and compare the results. The domain visualizations reveal the relationships within and between the four fields that together constitute the focus of this chapter. We conclude with a general discussion of research possibilities. Painting a "big picture" of scientific knowledge has long been desirable for a variety of reasons. Traditional approaches are brute force: scholars must sort through mountains of literature to perceive the outlines of their field. Obviously, this is time-consuming, difficult to replicate, and entails subjective judgments. The task is enormously complex. Sifting through recently published documents to find those that will later be recognized as important is labor intensive. Traditional approaches struggle to keep up with the pace of information growth. In multidisciplinary fields of study it is especially difficult to maintain an overview of literature dynamics. Painting the big picture of an ever-evolving scientific discipline is akin to the situation described in the widely known Indian legend about the blind men and the elephant. As the story goes, six blind men were trying to find out what an elephant looked like. They touched different parts of the elephant and quickly jumped to their conclusions. The one touching the body said it must be like a wall; the one touching the tail said it was like a snake; the one touching the legs said it was like a tree trunk, and so forth. But science does not stand still; the steady stream of new scientific literature creates a continuously changing structure. The resulting disappearance, fusion, and emergence of research areas add another twist to the tale: it is as if the elephant is running and dynamically changing its shape. Domain visualization, an emerging field of study, is in a similar situation. Relevant literature is spread across disciplines that have traditionally had few connections. Researchers examining the domain from a particular discipline cannot possibly have an adequate understanding of the whole. As noted by White and McCain (1997), the new generation of information scientists is technically driven in its efforts to visualize scientific disciplines. However, limited progress has been made in terms of connecting pioneers' theories and practices with the potentialities of today's enabling technologies. If the difference between past and present generations lies in the power of available technologies, what they have in common is the ultimate goal: to reveal the development of scientific knowledge.
    Source
    Annual review of information science and technology. 37(2003), S.179-258
  20. Hoeber, O.; Yang, X.D.: HotMap : supporting visual exploration of Web search results (2009)
    Abstract
    Although information retrieval techniques used by Web search engines have improved substantially over the years, the results of Web searches have continued to be represented in simple list-based formats. Although the list-based representation makes it easy to evaluate a single document for relevance, it does not support the users in the broader tasks of manipulating or exploring the search results as they attempt to find a collection of relevant documents. HotMap is a meta-search system that provides a compact visual representation of Web search results at two levels of detail, and it supports interactive exploration via nested sorting of Web search results based on query term frequencies. An evaluation of the search results for a set of vague queries has shown that the re-sorted search results can provide a higher portion of relevant documents among the top search results. User studies show an increase in speed and effectiveness and a reduction in missed documents when comparing HotMap to the list-based representation used by Google. Subjective measures were positive, and users showed a preference for the HotMap interface. These results provide evidence for the utility of next-generation Web search results interfaces that promote interactive search results exploration.
    Source
    Journal of the American Society for Information Science and Technology. 60(2009) no.1, S.90-110
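The nested sorting that HotMap is described as using, re-ordering results by the frequency of one query term and then the next, maps naturally onto multi-key sorting. A minimal sketch (the documents, field names, and term frequencies below are invented for illustration, not drawn from the system):

```python
# Sketch: nested sort of search results by per-term query frequencies.
# Negating counts sorts each key descending while keeping the sort stable,
# so ties on the first term are broken by the second, and so on.

def nested_sort(results, term_order):
    """Order results by frequency of term_order[0], then term_order[1], ..."""
    return sorted(
        results,
        key=lambda doc: tuple(-doc["tf"].get(t, 0) for t in term_order),
    )

results = [  # hypothetical documents with per-term frequencies
    {"id": 1, "tf": {"visual": 2, "search": 5}},
    {"id": 2, "tf": {"visual": 4, "search": 1}},
    {"id": 3, "tf": {"visual": 4, "search": 3}},
]

print([d["id"] for d in nested_sort(results, ["visual", "search"])])
```

Re-running the sort with a different `term_order` is the interactive move the abstract describes: the user promotes a term, and the result list reorganizes around it without a new query.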

Years

Languages

  • e 140
  • d 7
  • a 1

Types

  • a 125
  • el 28
  • m 12
  • x 6
  • s 2
  • b 1
  • p 1
  • r 1

Subjects

Classifications