Search (27 results, page 1 of 2)

  • Active filter: theme_ss:"Visualisierung"
  1. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.04
    0.04296155 = product of:
      0.12888464 = sum of:
        0.060419887 = weight(_text_:21st in 1444) [ClassicSimilarity], result of:
          0.060419887 = score(doc=1444,freq=2.0), product of:
            0.2381352 = queryWeight, product of:
              5.74105 = idf(docFreq=385, maxDocs=44218)
              0.041479383 = queryNorm
            0.25372094 = fieldWeight in 1444, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.74105 = idf(docFreq=385, maxDocs=44218)
              0.03125 = fieldNorm(doc=1444)
        0.06846476 = sum of:
          0.045985226 = weight(_text_:century in 1444) [ClassicSimilarity], result of:
            0.045985226 = score(doc=1444,freq=2.0), product of:
              0.20775084 = queryWeight, product of:
                5.0085325 = idf(docFreq=802, maxDocs=44218)
                0.041479383 = queryNorm
              0.22134796 = fieldWeight in 1444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.0085325 = idf(docFreq=802, maxDocs=44218)
                0.03125 = fieldNorm(doc=1444)
          0.022479536 = weight(_text_:22 in 1444) [ClassicSimilarity], result of:
            0.022479536 = score(doc=1444,freq=2.0), product of:
              0.14525373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041479383 = queryNorm
              0.15476047 = fieldWeight in 1444, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1444)
      0.33333334 = coord(2/6)
    
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
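    The score breakdown above follows Lucene's ClassicSimilarity explain format. As a rough illustration (not part of the original record), the Python sketch below recomputes the "_text_:21st" weight of this entry from the factors listed above; the function name and structure are our own.
      import math

      # Recompute one term weight from the explain factors for doc 1444:
      # freq=2, docFreq=385, maxDocs=44218, queryNorm=0.041479383, fieldNorm=0.03125.
      def classic_similarity_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                             # 1.4142135
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 5.74105
          query_weight = idf * query_norm                  # 0.2381352
          field_weight = tf * idf * field_norm             # 0.25372094
          return query_weight * field_weight               # 0.060419887

      w = classic_similarity_weight(2.0, 385, 44218, 0.041479383, 0.03125)
      print(round(w, 9))  # ~0.060419887, matching the weight(_text_:21st ...) line above
    The per-term weights are then summed and scaled by the coord factor (matched query clauses / total clauses, here 2/6), which gives the 0.0430 shown for this entry.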
  2. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.03
    0.025455397 = product of:
      0.076366186 = sum of:
        0.05950654 = weight(_text_:history in 1289) [ClassicSimilarity], result of:
          0.05950654 = score(doc=1289,freq=2.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.3083858 = fieldWeight in 1289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.01685965 = product of:
          0.0337193 = sum of:
            0.0337193 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.0337193 = score(doc=1289,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Paper presented at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  3. Kocijan, K.: Visualizing natural language resources (2015) 0.03
    0.025174957 = product of:
      0.15104973 = sum of:
        0.15104973 = weight(_text_:21st in 2995) [ClassicSimilarity], result of:
          0.15104973 = score(doc=2995,freq=2.0), product of:
            0.2381352 = queryWeight, product of:
              5.74105 = idf(docFreq=385, maxDocs=44218)
              0.041479383 = queryNorm
            0.6343024 = fieldWeight in 2995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.74105 = idf(docFreq=385, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
      0.16666667 = coord(1/6)
    
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  4. Burnett, R.: How images think (2004) 0.02
    0.01528414 = product of:
      0.04585242 = sum of:
        0.034356114 = weight(_text_:history in 3884) [ClassicSimilarity], result of:
          0.034356114 = score(doc=3884,freq=6.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.17804661 = fieldWeight in 3884, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.015625 = fieldNorm(doc=3884)
        0.011496306 = product of:
          0.022992613 = sum of:
            0.022992613 = weight(_text_:century in 3884) [ClassicSimilarity], result of:
              0.022992613 = score(doc=3884,freq=2.0), product of:
                0.20775084 = queryWeight, product of:
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.041479383 = queryNorm
                0.11067398 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3884)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Footnote
    Review in: JASIST 56(2005) no.10, pp.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication-computers, for instance-suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs-they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live" and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world-they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events-for all perceptive, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization-viewers generate what they see in the images. This involves the historical moment of viewing-such as viewing images of the WTC bombings-and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
    Moving on to virtual images, Burnett posits the existence of "microcultures": places where people take control of the means of creation and production in order to make sense of their social and cultural experiences. Driven by the need for community, such microcultures generate specific images as part of a cultural movement (Burnett in fact argues that microcultures make it possible for a "small cinema of twenty-five seats to become part of a cultural movement" [p. 63]), where the process of visualization-which involves an awareness of the historical moment - is central to the info-world and imagescapes presented. The computer becomes an archive, a history. The challenge is not only of preserving information, but also of extracting it. Visualization increasingly involves this process of picking a "vantage point" in order to selectively assimilate the information. In virtual reality systems, and in the digital age in general, the distance between what is being pictured and what is experienced is overcome. Images used to be treated as opaque or transparent films among experience, perception, and thought. But, now, images are taken to another level, where the viewer is immersed in the image-experience. Burnett argues-though this is hardly a fresh insight-that "interactivity is only possible when images are the raw material used by participants to change if not transform the purpose of their viewing experience" (p. 90). He suggests that a work of art "does not start its life as an image ... it gains the status of image when it is placed into a context of viewing and visualization" (p. 90). With simulations and cyberspace the viewing experience has been changed utterly. Burnett defines simulation as "mapping different realities into images that have an environmental, cultural, and social form" (p. 95). However, the emphasis in Burnett is significant-he suggests that interactivity is not achieved through effects, but as a result of experiences attached to stories. Narrative is not merely the effect of technology-it is as much about awareness as it is about fantasy. Heightened awareness, which is popular culture's aim at all times, and now available through head-mounted displays (HMD), also involves human emotions and the subtleties of human intuition.
    The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that to use biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the one we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode of images today, is the subject of chapter seven. The issue here is whether P2P affects community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum-from 17th-century letter writing to e-mail. In the new media-and Burnett uses the example of popular music, which can be sampled and reedited to create new compositions-the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (about which Burnett has already expended considerable space in the early chapters) involves a lot more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000 and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with image-worlds. Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. Burnett concludes that images no longer simply represent the world-they shape our very interaction with it; they become the foundation for our understanding of the spaces, places, and historical moments that we inhabit. Burnett concludes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
    Burnett's work is a useful basic primer on the new media. One of the chief attractions here is his clear language, devoid of the jargon of either computer sciences or advanced critical theory. This makes How Images Think an accessible introduction to digital cultures. Burnett explores the impact of the new technologies on not just image-making but on image-effects, and the ways in which images constitute our ecologies of identity, communication, and subject-hood. While some of the sections seem a little too basic (especially where he speaks about the ways in which we constitute an object as an object of art, see above), particularly in the wake of reception theory, it still remains a starting point for those interested in cultural studies of the new media. The case Burnett makes out for the transformation of the ways in which we look at images has been strengthened by his attention to the history of this transformation-from photography through television and cinema and now to immersive virtual reality systems. Joseph Koerner (2004) has pointed out that the iconoclasm of early modern Europe actually demonstrates how idolatry was integral to the image-breakers' core belief. As Koerner puts it, "images never go away ... they persist and function by being perpetually destroyed" (p. 12). Burnett, likewise, argues that images in new media are reformed to suit new contexts of meaning-production-even when they appear to be destroyed. Images are recast, and the degree of their realism (or fantasy) heightened or diminished-but they do not "go away." Images do think, but-if I can parse Burnett's entire work-they think with, through, and in human intelligence, emotions, and intuitions. Images are uncanny-they are both us and not-us, ours and not-ours. There is, surprisingly, one factual error. Burnett claims that Myron Krueger pioneered the term "virtual reality." To the best of my knowledge, it was Jaron Lanier who did so (see Featherstone & Burrows, 1998 [1995], p. 5)."
  5. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.01
    0.011688187 = product of:
      0.07012912 = sum of:
        0.07012912 = weight(_text_:history in 3756) [ClassicSimilarity], result of:
          0.07012912 = score(doc=3756,freq=4.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.3634361 = fieldWeight in 3756, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3756)
      0.16666667 = coord(1/6)
    
    Abstract
    Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates advantages of the graph-based approach.
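    The random-walk ranking described in this abstract can be illustrated with a random walk with restart (personalized PageRank) over a small relational graph. The sketch below is our own toy reconstruction; the graph, node names, and parameters are invented and are not the authors' data or implementation.
      import networkx as nx

      # Toy relational graph: a visitor, exhibits from the rating history,
      # themes, and candidate multimedia presentations (all names invented).
      G = nx.Graph()
      G.add_edges_from([
          ("visitor", "exhibit:mosaics"),              # rated exhibit
          ("visitor", "exhibit:coins"),                # rated exhibit
          ("exhibit:mosaics", "pres:mosaic_video"),
          ("exhibit:mosaics", "theme:roman"),
          ("exhibit:coins", "theme:roman"),
          ("theme:roman", "pres:roman_trade_audio"),
          ("exhibit:glass", "pres:glassblowing_clip"), # unrelated to this visitor
      ])

      # Random walk with restart toward the visitor node (personalized PageRank).
      scores = nx.pagerank(G, alpha=0.85, personalization={"visitor": 1.0})

      # Rank only the presentation nodes by their walk score.
      ranked = sorted((n for n in G if n.startswith("pres:")),
                      key=lambda n: scores[n], reverse=True)
      print(ranked)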
  6. Lin, X.; Bui, Y.: Information visualization (2009) 0.01
    0.011570716 = product of:
      0.069424294 = sum of:
        0.069424294 = weight(_text_:history in 3818) [ClassicSimilarity], result of:
          0.069424294 = score(doc=3818,freq=2.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.3597834 = fieldWeight in 3818, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3818)
      0.16666667 = coord(1/6)
    
    Abstract
    The goal of information visualization (IV) is to amplify human cognition through computer-generated, interactive, and visual data representation. By combining the computational power with human perceptional and associative capabilities, IV will make it easier for users to navigate through large amounts of information, discover patterns or hidden structures of the information, and understand semantics of the information space. This entry reviews the history and background of IV and discusses its basic principles with pointers to relevant resources. The entry also summarizes major IV techniques and toolkits and shows various examples of IV applications.
  7. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.01
    0.011410794 = product of:
      0.06846476 = sum of:
        0.06846476 = sum of:
          0.045985226 = weight(_text_:century in 2659) [ClassicSimilarity], result of:
            0.045985226 = score(doc=2659,freq=2.0), product of:
              0.20775084 = queryWeight, product of:
                5.0085325 = idf(docFreq=802, maxDocs=44218)
                0.041479383 = queryNorm
              0.22134796 = fieldWeight in 2659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.0085325 = idf(docFreq=802, maxDocs=44218)
                0.03125 = fieldNorm(doc=2659)
          0.022479536 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
            0.022479536 = score(doc=2659,freq=2.0), product of:
              0.14525373 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.041479383 = queryNorm
              0.15476047 = fieldWeight in 2659, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=2659)
      0.16666667 = coord(1/6)
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
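    The 1986 conversion rule described in the abstract above reduces to a small heuristic: a "See Also" reference present in both records becomes an associative (RT) link, while a one-way reference is promoted to a hierarchical (BT/NT) link. The sketch below is our own reconstruction on invented data, not Spero's code, and the direction chosen for the hierarchical link is an assumption.
      # Toy pre-conversion "See Also" data (invented); a single asymmetric
      # reference is enough to produce a spurious hierarchical link.
      see_also = {
          "Doorbells": {"Bells", "Mammals"},   # hypothetical one-way slip
          "Bells": {"Doorbells"},
          "Mammals": set(),
      }

      relations = []
      for heading, targets in see_also.items():
          for target in targets:
              if heading in see_also.get(target, set()):
                  # Reference exists in both records: treat as associative.
                  relations.append((heading, "RT", target))
              else:
                  # One-way reference: assume hierarchy (direction is a guess).
                  relations.append((heading, "BT", target))
                  relations.append((target, "NT", heading))

      for triple in sorted(relations):
          print(triple)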
  8. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    0.010020534 = product of:
      0.0601232 = sum of:
        0.0601232 = weight(_text_:history in 4276) [ClassicSimilarity], result of:
          0.0601232 = score(doc=4276,freq=6.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.31158158 = fieldWeight in 4276, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
      0.16666667 = coord(1/6)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed; and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  9. Soylu, A.; Giese, M.; Jimenez-Ruiz, E.; Kharlamov, E.; Zheleznyakov, D.; Horrocks, I.: Towards exploiting query history for adaptive ontology-based visual query formulation (2014) 0.01
    0.009917757 = product of:
      0.05950654 = sum of:
        0.05950654 = weight(_text_:history in 1576) [ClassicSimilarity], result of:
          0.05950654 = score(doc=1576,freq=2.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.3083858 = fieldWeight in 1576, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.046875 = fieldNorm(doc=1576)
      0.16666667 = coord(1/6)
    
  10. Petrovich, E.: Science mapping and science maps (2021) 0.01
    0.006611837 = product of:
      0.039671022 = sum of:
        0.039671022 = weight(_text_:history in 595) [ClassicSimilarity], result of:
          0.039671022 = score(doc=595,freq=2.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.20559052 = fieldWeight in 595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.03125 = fieldNorm(doc=595)
      0.16666667 = coord(1/6)
    
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
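    For the term-based maps mentioned in this abstract, the underlying network is a co-occurrence graph of terms appearing in the same publications. A minimal sketch with invented publications and terms (not taken from the entry) might look like this:
      import itertools
      from collections import Counter

      import networkx as nx

      # Terms co-occurring in the same publication become linked nodes;
      # the edge weight counts co-occurrences (toy data).
      publications = [
          {"science mapping", "citation analysis", "visualization"},
          {"science mapping", "co-word analysis", "visualization"},
          {"citation analysis", "bibliometrics"},
      ]

      cooccurrence = Counter()
      for terms in publications:
          for pair in itertools.combinations(sorted(terms), 2):
              cooccurrence[pair] += 1

      G = nx.Graph()
      for (a, b), weight in cooccurrence.items():
          G.add_edge(a, b, weight=weight)

      print(G.number_of_nodes(), "terms,", G.number_of_edges(), "links")
      print(sorted(G.edges(data="weight")))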
  11. Börner, K.; Chen, C.; Boyack, K.W.: Visualizing knowledge domains (2002) 0.01
    0.005785358 = product of:
      0.034712147 = sum of:
        0.034712147 = weight(_text_:history in 4286) [ClassicSimilarity], result of:
          0.034712147 = score(doc=4286,freq=2.0), product of:
            0.19296135 = queryWeight, product of:
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.041479383 = queryNorm
            0.1798917 = fieldWeight in 4286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6519823 = idf(docFreq=1146, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4286)
      0.16666667 = coord(1/6)
    
    Abstract
    This chapter reviews visualization techniques that can be used to map the ever-growing domain structure of scientific disciplines and to support information retrieval and classification. In contrast to the comprehensive surveys conducted in traditional fashion by Howard White and Katherine McCain (1997, 1998), this survey not only reviews emerging techniques in interactive data analysis and information visualization, but also depicts the bibliographical structure of the field itself. The chapter starts by reviewing the history of knowledge domain visualization. We then present a general process flow for the visualization of knowledge domains and explain commonly used techniques. In order to visualize the domain reviewed by this chapter, we introduce a bibliographic data set of considerable size, which includes articles from the citation analysis, bibliometrics, semantics, and visualization literatures. Using tutorial style, we then apply various algorithms to demonstrate the visualization effects produced by different approaches and compare the results. The domain visualizations reveal the relationships within and between the four fields that together constitute the focus of this chapter. We conclude with a general discussion of research possibilities. Painting a "big picture" of scientific knowledge has long been desirable for a variety of reasons. Traditional approaches are brute force: scholars must sort through mountains of literature to perceive the outlines of their field. Obviously, this is time-consuming and difficult to replicate, and entails subjective judgments. The task is enormously complex. Sifting through recently published documents to find those that will later be recognized as important is labor intensive. Traditional approaches struggle to keep up with the pace of information growth. In multidisciplinary fields of study it is especially difficult to maintain an overview of literature dynamics. Painting the big picture of an ever-evolving scientific discipline is akin to the situation described in the widely known Indian legend about the blind men and the elephant. As the story goes, six blind men were trying to find out what an elephant looked like. They touched different parts of the elephant and quickly jumped to their conclusions. The one touching the body said it must be like a wall; the one touching the tail said it was like a snake; the one touching the legs said it was like a tree trunk, and so forth. But science does not stand still; the steady stream of new scientific literature creates a continuously changing structure. The resulting disappearance, fusion, and emergence of research areas add another twist to the tale-it is as if the elephant is running and dynamically changing its shape. Domain visualization, an emerging field of study, is in a similar situation. Relevant literature is spread across disciplines that have traditionally had few connections. Researchers examining the domain from a particular discipline cannot possibly have an adequate understanding of the whole. As noted by White and McCain (1997), the new generation of information scientists is technically driven in its efforts to visualize scientific disciplines. However, limited progress has been made in terms of connecting pioneers' theories and practices with the potentialities of today's enabling technologies. If the difference between past and present generations lies in the power of available technologies, what they have in common is the ultimate goal - to reveal the development of scientific knowledge.
  12. Wainer, H.: Picturing the uncertain world : how to understand, communicate, and control uncertainty through graphical display (2009) 0.01
    0.0054194108 = product of:
      0.032516465 = sum of:
        0.032516465 = product of:
          0.06503293 = sum of:
            0.06503293 = weight(_text_:century in 1451) [ClassicSimilarity], result of:
              0.06503293 = score(doc=1451,freq=4.0), product of:
                0.20775084 = queryWeight, product of:
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.041479383 = queryNorm
                0.31303328 = fieldWeight in 1451, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1451)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    In his entertaining and informative book "Graphic Discovery", Howard Wainer unlocked the power of graphical display to make complex problems clear. Now he's back with Picturing the Uncertain World, a book that explores how graphs can serve as maps to guide us when the information we have is ambiguous or incomplete. Using a visually diverse sampling of graphical display, from heartrending autobiographical displays of genocide in the Kovno ghetto to the 'Pie Chart of Mystery' in a "New Yorker" cartoon, Wainer illustrates the many ways graphs can be used - and misused - as we try to make sense of an uncertain world. "Picturing the Uncertain World" takes readers on an extraordinary graphical adventure, revealing how the visual communication of data offers answers to vexing questions yet also highlights the measure of uncertainty in almost everything we do. Are cancer rates higher or lower in rural communities? How can you know how much money to sock away for retirement when you don't know when you'll die? And where exactly did nineteenth-century novelists get their ideas? These are some of the fascinating questions Wainer invites readers to consider. Along the way he traces the origins and development of graphical display, from William Playfair, who pioneered the use of graphs in the eighteenth century, to instances today where the public has been misled through poorly designed graphs. We live in a world full of uncertainty, yet it is within our grasp to take its measure. Read "Picturing the Uncertain World" and learn how.
  13. Heuvel, C. van den; Salah, A.A.; Knowledge Space Lab: Visualizing universes of knowledge : design and visual analysis of the UDC (2011) 0.00
    0.0047901277 = product of:
      0.028740766 = sum of:
        0.028740766 = product of:
          0.05748153 = sum of:
            0.05748153 = weight(_text_:century in 4831) [ClassicSimilarity], result of:
              0.05748153 = score(doc=4831,freq=2.0), product of:
                0.20775084 = queryWeight, product of:
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.041479383 = queryNorm
                0.27668494 = fieldWeight in 4831, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0085325 = idf(docFreq=802, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4831)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Abstract
    In the 1950s, the "universe of knowledge" metaphor returned in discussions around the "first theory of faceted classification", the Colon Classification (CC) of S.R. Ranganathan, to stress the differences within a "universe of concepts" system. Here we claim that the Universal Decimal Classification (UDC) has been either ignored or incorrectly represented in studies that focused on the pivotal role of Ranganathan in a transition from "top-down universe of concepts systems" to "bottom-up universe of concepts systems." Early 20th-century designs from Paul Otlet reveal a two-directional interaction between "elements" and "ensembles" that can be compared to the relations between the universe of knowledge and universe of concepts systems. Moreover, an unpublished manuscript with the title "Theorie schematique de la Classification" of 1908 includes sketches that demonstrate an exploration by Paul Otlet of the multidimensional characteristics of the UDC. The interactions between these one- and multidimensional representations of the UDC support Donker Duyvis' critical comments to Ranganathan, who had dismissed it as a rigid hierarchical system in comparison to his own Colon Classification. A visualization of the experiments of the Knowledge Space Lab, in which main categories of Wikipedia were mapped onto the UDC, provides empirical evidence of its faceted structure's flexibility.
  14. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.00
    0.0046832366 = product of:
      0.02809942 = sum of:
        0.02809942 = product of:
          0.05619884 = sum of:
            0.05619884 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.05619884 = score(doc=3406,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    30. 5.2010 16:22:35
  15. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.00
    0.0046832366 = product of:
      0.02809942 = sum of:
        0.02809942 = product of:
          0.05619884 = sum of:
            0.05619884 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.05619884 = score(doc=2755,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    1. 2.2016 18:25:22
  16. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.00
    0.0039738584 = product of:
      0.023843149 = sum of:
        0.023843149 = product of:
          0.047686297 = sum of:
            0.047686297 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.047686297 = score(doc=3355,freq=4.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  17. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.00
    0.0032782655 = product of:
      0.019669592 = sum of:
        0.019669592 = product of:
          0.039339185 = sum of:
            0.039339185 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
              0.039339185 = score(doc=2500,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.2708308 = fieldWeight in 2500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2500)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    30. 1.2007 18:22:41
  18. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.00
    0.0028099418 = product of:
      0.01685965 = sum of:
        0.01685965 = product of:
          0.0337193 = sum of:
            0.0337193 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
              0.0337193 = score(doc=1781,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.23214069 = fieldWeight in 1781, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1781)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 3.2008 14:35:21
  19. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.00
    0.0028099418 = product of:
      0.01685965 = sum of:
        0.01685965 = product of:
          0.0337193 = sum of:
            0.0337193 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.0337193 = score(doc=3693,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2010 19:36:46
  20. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.00
    0.0028099418 = product of:
      0.01685965 = sum of:
        0.01685965 = product of:
          0.0337193 = sum of:
            0.0337193 = weight(_text_:22 in 1709) [ClassicSimilarity], result of:
              0.0337193 = score(doc=1709,freq=2.0), product of:
                0.14525373 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.041479383 = queryNorm
                0.23214069 = fieldWeight in 1709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1709)
          0.5 = coord(1/2)
      0.16666667 = coord(1/6)
    
    Date
    4. 2.2015 9:22:39

Languages

  • e 21
  • d 5
  • a 1
