Search (54 results, page 2 of 3)

  • theme_ss:"Visualisierung"
  1. Julien, C.-A.; Leide, J.E.; Bouthillier, F.: Controlled user evaluations of information visualization interfaces for text retrieval : literature review and meta-analysis (2008) 0.01
    0.0099652 = product of:
      0.0398608 = sum of:
        0.0398608 = weight(_text_:subject in 1718) [ClassicSimilarity], result of:
          0.0398608 = score(doc=1718,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 1718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=1718)
      0.25 = coord(1/4)
    
    Abstract
    This review describes experimental designs (users, search tasks, measures, etc.) used by 31 controlled user studies of information visualization (IV) tools for textual information retrieval (IR) and a meta-analysis of the reported statistical effects. Comparable experimental designs allow research designers to compare their results with other reports, and support the development of experimentally verified design guidelines concerning which IV techniques are better suited to which types of IR tasks. The studies generally use a within-subject design with 15 or more undergraduate students performing browsing to known-item tasks on sets of at least 1,000 full-text articles or Web pages on topics of general interest/news. Results of the meta-analysis (N = 8) showed no significant effects of the IV tool as compared with a text-only equivalent, but the set showed great variability, suggesting an inadequate basis of comparison. Experimental design recommendations are provided which would support comparison of existing IV tools for IR usability testing.
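    The per-result score breakdowns in this listing are Lucene ClassicSimilarity explain() trees. As a minimal sketch (Python rather than Lucene itself; the function and variable names are ours), the components shown for result 1 combine as follows, using the constants read off the tree above:
    import math

    # Sketch of Lucene ClassicSimilarity scoring, assembled from the explain()
    # components shown for result 1 (field _text_:subject, doc 1718).
    def classic_similarity_score(freq, doc_freq, max_docs, query_norm,
                                 field_norm, coord):
        tf = math.sqrt(freq)                               # 1.4142135 for freq = 2
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.576596 for docFreq = 3361
        query_weight = idf * query_norm                    # 0.16812018
        field_weight = tf * idf * field_norm               # 0.23709705
        return coord * query_weight * field_weight         # 0.25 * 0.0398608

    print(classic_similarity_score(freq=2.0, doc_freq=3361, max_docs=44218,
                                   query_norm=0.04700564, field_norm=0.046875,
                                   coord=0.25))            # ~0.0099652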
  2. Howarth, L.C.: Mapping the world of knowledge : cartograms and the diffusion of knowledge 0.01
    0.0099652 = product of:
      0.0398608 = sum of:
        0.0398608 = weight(_text_:subject in 3550) [ClassicSimilarity], result of:
          0.0398608 = score(doc=3550,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 3550, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=3550)
      0.25 = coord(1/4)
    
    Abstract
    Displaying aspects of "aboutness" by means of non-verbal representations, such as notations, symbols, or icons, or through rich visual displays, such as those of topic maps, can facilitate meaning-making, putting information in context, and situating it relative to other information. As the design of displays of web-enabled information has struggled to keep pace with a burgeoning body of digital content, increasingly innovative approaches to organizing search results have warranted greater attention. Using Worldmapper as an example, this paper examines cartograms - a derivative of the data map which adds dimensionality to the geographic positioning of information - as one approach to representing and managing subject content, and to tracking the diffusion of knowledge across place and time.
  3. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.01
    0.0099652 = product of:
      0.0398608 = sum of:
        0.0398608 = weight(_text_:subject in 3884) [ClassicSimilarity], result of:
          0.0398608 = score(doc=3884,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.23709705 = fieldWeight in 3884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.25 = coord(1/4)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
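    The multiple maps variant described above is not part of standard libraries, but the single-map baseline it extends can be sketched with scikit-learn's t-SNE on a precomputed dissimilarity matrix. A rough sketch with placeholder data, showing only the kind of input (pairwise similarities) and output (one 2-D map) involved:
    import numpy as np
    from sklearn.manifold import TSNE

    # Placeholder pairwise similarities; real inputs would be word associations
    # or co-occurrence statistics as in the abstract above.
    rng = np.random.default_rng(0)
    sim = rng.random((50, 50))
    sim = (sim + sim.T) / 2          # symmetrize
    np.fill_diagonal(sim, 1.0)

    dist = 1.0 - sim                 # t-SNE expects dissimilarities
    np.fill_diagonal(dist, 0.0)

    # Standard (single-map) t-SNE: every object gets exactly one point, which is
    # where the metric-map limitations discussed in the abstract come from.
    emb = TSNE(n_components=2, metric="precomputed", init="random",
               perplexity=10, random_state=0).fit_transform(dist)
    print(emb.shape)                 # (50, 2)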
  4. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.01
    0.008304333 = product of:
      0.033217333 = sum of:
        0.033217333 = weight(_text_:subject in 3756) [ClassicSimilarity], result of:
          0.033217333 = score(doc=3756,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.19758089 = fieldWeight in 3756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3756)
      0.25 = coord(1/4)
    
    Abstract
    Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates advantages of the graph-based approach.
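    A rough sketch of the general pattern described above (rating history and domain knowledge in one relational graph, ranked for a visitor by a personalized random walk), using networkx's PageRank with a restart at the visitor node. The toy graph and node labels are invented, and the paper's actual walk measure and graph schema may differ:
    import networkx as nx

    G = nx.Graph()
    # Rating history: visitor -> multimedia presentations already consumed/rated.
    G.add_edge("visitor:42", "pres:phoenician_jar", weight=5)
    G.add_edge("visitor:42", "pres:ancient_ship", weight=4)
    # Domain knowledge: presentations linked to exhibit themes, themes to themes.
    G.add_edge("pres:phoenician_jar", "theme:phoenician_trade")
    G.add_edge("pres:ancient_ship", "theme:seafaring")
    G.add_edge("theme:phoenician_trade", "theme:seafaring")
    G.add_edge("pres:harbor_model", "theme:seafaring")

    # Personalized PageRank: restart the walk at the visitor's node and rank
    # presentations the visitor has not seen yet.
    personalization = {n: (1.0 if n == "visitor:42" else 0.0) for n in G}
    scores = nx.pagerank(G, alpha=0.85, personalization=personalization)
    recommendations = sorted(
        (n for n in scores
         if n.startswith("pres:") and not G.has_edge("visitor:42", n)),
        key=scores.get, reverse=True)
    print(recommendations)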
  5. Leide, J.E.; Large, A.; Beheshti, J.; Brooks, M.; Cole, C.: Visualization schemes for domain novices exploring a topic space : the navigation classification scheme (2003) 0.01
    0.00806398 = product of:
      0.03225592 = sum of:
        0.03225592 = product of:
          0.06451184 = sum of:
            0.06451184 = weight(_text_:classification in 1078) [ClassicSimilarity], result of:
              0.06451184 = score(doc=1078,freq=12.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.43094325 = fieldWeight in 1078, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1078)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this article and two other articles which conceptualize a future stage of the research program (Leide, Cole, Large, & Beheshti, submitted for publication; Cole, Leide, Large, Beheshti, & Brooks, in preparation), we map out a domain novice user's encounter with an IR system from beginning to end so that appropriate classification-based visualization schemes can be inserted into the encounter process. This article describes the visualization of a navigation classification scheme only. The navigation classification scheme uses the metaphor of a ship and ship's navigator traveling through charted (but unknown to the user) waters, guided by a series of lighthouses. The lighthouses contain mediation interfaces linking the user to the information store through agents created for each. The user's agent is the cognitive model the user has of the information space, which the system encourages to evolve via interaction with the system's agent. The system's agent is an evolving classification scheme created by professional indexers to represent the structure of the information store. We propose a more systematic, multidimensional approach to creating evolving classification/indexing schemes, based on where the user is and what she is trying to do at that moment during the search session.
  6. Heuvel, C. van den; Salah, A.A.; Knowledge Space Lab: Visualizing universes of knowledge : design and visual analysis of the UDC (2011) 0.01
    0.00806398 = product of:
      0.03225592 = sum of:
        0.03225592 = product of:
          0.06451184 = sum of:
            0.06451184 = weight(_text_:classification in 4831) [ClassicSimilarity], result of:
              0.06451184 = score(doc=4831,freq=12.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.43094325 = fieldWeight in 4831, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4831)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In the 1950s, the "universe of knowledge" metaphor returned in discussions around the "first theory of faceted classification", the Colon Classification (CC) of S.R. Ranganathan, to stress the differences within a "universe of concepts" system. Here we claim that the Universal Decimal Classification (UDC) has been either ignored or incorrectly represented in studies that focused on the pivotal role of Ranganathan in a transition from "top-down universe of concepts systems" to "bottom-up universe of concepts systems." Early 20th century designs from Paul Otlet reveal a two-directional interaction between "elements" and "ensembles" that can be compared to the relations between the universe of knowledge and universe of concepts systems. Moreover, an unpublished manuscript with the title "Theorie schematique de la Classification" of 1908 includes sketches that demonstrate an exploration by Paul Otlet of the multidimensional characteristics of the UDC. The interactions between these one- and multidimensional representations of the UDC support Donker Duyvis' critical comments to Ranganathan, who had dismissed it as a rigid hierarchical system in comparison to his own Colon Classification. A visualization of the experiments of the Knowledge Space Lab, in which main categories of Wikipedia were mapped on the UDC, provides empirical evidence of its faceted structure's flexibility.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
  7. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.01
    0.00806398 = product of:
      0.03225592 = sum of:
        0.03225592 = product of:
          0.06451184 = sum of:
            0.06451184 = weight(_text_:classification in 3869) [ClassicSimilarity], result of:
              0.06451184 = score(doc=3869,freq=12.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.43094325 = fieldWeight in 3869, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3869)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. Therefore, the study aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing their knowledge structures, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes through various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualizing techniques generates empirical evidence leading to interpretation.
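    A rough sketch of the kind of quantitative comparison described above: counts of class numbers per main class in the two schemes, plotted side by side. The counts below are placeholders, not actual KDC or DDC statistics:
    import matplotlib.pyplot as plt

    main_classes = ["000", "100", "200", "300", "400",
                    "500", "600", "700", "800", "900"]
    kdc_counts = [310, 250, 420, 610, 180, 540, 700, 330, 460, 390]  # placeholder
    ddc_counts = [350, 240, 510, 640, 200, 560, 720, 310, 440, 410]  # placeholder

    x = range(len(main_classes))
    plt.bar([i - 0.2 for i in x], kdc_counts, width=0.4, label="KDC")
    plt.bar([i + 0.2 for i in x], ddc_counts, width=0.4, label="DDC")
    plt.xticks(list(x), main_classes)
    plt.ylabel("class numbers per main class")
    plt.legend()
    plt.show()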
  8. Vizine-Goetz, D.: DeweyBrowser (2006) 0.01
    0.007982934 = product of:
      0.031931736 = sum of:
        0.031931736 = product of:
          0.06386347 = sum of:
            0.06386347 = weight(_text_:classification in 5774) [ClassicSimilarity], result of:
              0.06386347 = score(doc=5774,freq=6.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.42661208 = fieldWeight in 5774, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5774)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The DeweyBrowser allows users to search and browse collections of library resources organized by the Dewey Decimal Classification (DDC) system. The visual interface provides access to several million records from the OCLC WorldCat database and to a collection of records derived from the abridged edition of DDC. The prototype was developed out of a desire to make the most of Dewey numbers assigned to library materials and to explore new ways of providing access to the DDC.
    Footnote
    Beitrag in einem Themenheft "Moving beyond the presentation layer: content and context in the Dewey Decimal Classification (DDC) System"
    Source
    Cataloging and classification quarterly. 42(2006) nos.3/4, S.213-220
  9. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.007960769 = product of:
      0.031843077 = sum of:
        0.031843077 = product of:
          0.063686155 = sum of:
            0.063686155 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.063686155 = score(doc=3406,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 5.2010 16:22:35
  10. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.007960769 = product of:
      0.031843077 = sum of:
        0.031843077 = product of:
          0.063686155 = sum of:
            0.063686155 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.063686155 = score(doc=2755,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22
  11. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.01
    0.0074491864 = product of:
      0.029796746 = sum of:
        0.029796746 = product of:
          0.05959349 = sum of:
            0.05959349 = weight(_text_:classification in 317) [ClassicSimilarity], result of:
              0.05959349 = score(doc=317,freq=4.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.39808834 = fieldWeight in 317, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=317)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of a classification's structure and its current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
  12. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.01
    0.0071917637 = product of:
      0.028767055 = sum of:
        0.028767055 = weight(_text_:subject in 1211) [ClassicSimilarity], result of:
          0.028767055 = score(doc=1211,freq=6.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.17111006 = fieldWeight in 1211, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
    
    Abstract
    From the user's perspective, however, it is still difficult to use current information retrieval systems. Users frequently have problems expressing their information needs and translating those needs into queries. This is partly due to the fact that information needs cannot be expressed appropriately in the system's terms. It is not unusual for users to input search terms that are different from the index terms information systems use. Various methods have been proposed to help users choose search terms and articulate queries. One widely used approach is to incorporate into the information system a thesaurus-like component that represents both the important concepts in a particular subject area and the semantic relationships among those concepts. Unfortunately, the development and use of thesauri is not without its own problems. The thesaurus employed in a specific information system has often been developed for a general subject area and needs significant enhancement to be tailored to the information system where it is to be used. This thesaurus development process, if done manually, is both time consuming and labor intensive. Usage of a thesaurus in searching is complex and may raise barriers for the user. For illustration purposes, let us consider two scenarios of thesaurus usage. In the first scenario the user inputs a search term and the thesaurus then displays a matching set of related terms. Without an overview of the thesaurus - and without the ability to see the matching terms in the context of other terms - it may be difficult to assess the quality of the related terms in order to select the correct term. In the second scenario the user browses the whole thesaurus, which is organized as an alphabetically ordered list. The problem with this approach is that the list may be long, and it does not show users the global semantic relationships among all the listed terms.
    Nevertheless, because thesaurus use has been shown to improve retrieval, for our method we integrate functions in the search interface that permit users to explore built-in search vocabularies to improve retrieval from digital libraries. Our method automatically generates the terms and their semantic relationships representing relevant topics covered in a digital library. We call these generated terms the "concepts", and the generated terms and their semantic relationships we call the "concept space". Additionally, we used a visualization technique to display the concept space and allow users to interact with this space. The automatically generated term set is considered to be more representative of the subject area of a corpus than an "externally" imposed thesaurus, and our method has the potential of saving a significant amount of time and labor for those who have been manually creating thesauri as well. Information visualization is an emerging discipline that has developed very quickly over the last decade. With growing volumes of documents and associated complexities, information visualization has become increasingly important. Researchers have found information visualization to be an effective way to use and understand information while minimizing a user's cognitive load. Our work was based on an algorithmic approach of concept discovery and association. Concepts are discovered using an algorithm based on an automated thesaurus generation procedure. Subsequently, similarities among terms are computed using the cosine measure, and the associations among terms are established using a method known as max-min distance clustering. The concept space is then visualized in a spring embedding graph, which roughly shows the semantic relationships among concepts in a 2-D visual representation. The semantic space of the visualization is used as a medium for users to retrieve the desired documents. In the remainder of this article, we present our algorithmic approach of concept generation and clustering, followed by a description of the visualization technique and interactive interface. The paper ends with key conclusions and discussions on future work.
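    A rough sketch of the pipeline outlined above (term vectors, cosine similarities, an association graph, and a spring-embedded 2-D layout). The thesaurus-generation and max-min clustering steps are omitted, and the term vectors are random placeholders rather than weights derived from D-Lib content:
    import numpy as np
    import networkx as nx

    terms = ["library", "metadata", "archive", "retrieval", "ontology", "index"]
    rng = np.random.default_rng(1)
    vectors = rng.random((len(terms), 20))           # placeholder term weights

    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    cosine = unit @ unit.T                           # pairwise cosine similarities

    G = nx.Graph()
    G.add_nodes_from(terms)
    threshold = 0.75                                 # keep only strong associations
    for i in range(len(terms)):
        for j in range(i + 1, len(terms)):
            if cosine[i, j] >= threshold:
                G.add_edge(terms[i], terms[j], weight=float(cosine[i, j]))

    # Spring embedding: strongly associated terms are pulled together in 2-D.
    for term, (x, y) in nx.spring_layout(G, weight="weight", seed=2).items():
        print(f"{term:10s} {x:+.2f} {y:+.2f}")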
  13. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.006908901 = product of:
      0.027635604 = sum of:
        0.027635604 = sum of:
          0.014898373 = weight(_text_:classification in 1789) [ClassicSimilarity], result of:
            0.014898373 = score(doc=1789,freq=4.0), product of:
              0.14969917 = queryWeight, product of:
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.04700564 = queryNorm
              0.099522084 = fieldWeight in 1789, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.1847067 = idf(docFreq=4974, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
          0.01273723 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.01273723 = score(doc=1789,freq=2.0), product of:
              0.16460574 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04700564 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
      0.25 = coord(1/4)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
  14. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.0067549367 = product of:
      0.027019747 = sum of:
        0.027019747 = product of:
          0.054039493 = sum of:
            0.054039493 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.054039493 = score(doc=3355,freq=4.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  15. Visual thesaurus (2005) 0.01
    0.0066434667 = product of:
      0.026573867 = sum of:
        0.026573867 = weight(_text_:subject in 1292) [ClassicSimilarity], result of:
          0.026573867 = score(doc=1292,freq=2.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.15806471 = fieldWeight in 1292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.03125 = fieldNorm(doc=1292)
      0.25 = coord(1/4)
    
    Content
    Traditional print reference guides often have two methods of finding information: an order (alphabetical for dictionaries and encyclopedias, by subject hierarchy in the case of thesauri) and indices (ordered lists, with a more complete listing of words and concepts, which refer back to original content from the main body of the book). A user of such traditional print reference guides who is looking for information will either browse through the ordered information in the main body of the reference book, or scan through the indices to find what is necessary. The advent of the computer allows for much more rapid electronic searches of the same information, and for multiple layers of indices. Users can either search through information by entering a keyword, or users can browse through the information through an outline index, which represents the information contained in the main body of the data. There are two traditional user interfaces for such applications. First, the user may type text into a search field and, in response, a list of results is returned to the user. The user then selects a returned entry and may page through the resulting information. Alternatively, the user may choose from a list of words from an index. For example, software thesaurus applications, in which a user attempts to find synonyms, antonyms, homonyms, etc. for a selected word, are usually implemented using the conventional search and presentation techniques discussed above. The presentation of results only allows for a one-dimensional ordering of data at any one time. In addition, only a limited number of results can be shown at once, and selecting a result inevitably leads to another page; if the result is not satisfactory, the user must search again. Finally, it is difficult to present information about the manner in which the search results are related, or to present quantitative information about the results without causing confusion. Therefore, there exists a need for a multidimensional graphical display of information, in particular with respect to information relating to the meaning of words and their relationships to other words. There further exists a need to present large amounts of information in a way that can be manipulated by the user, without the user losing his or her place. And there exists a need for more fluid, intuitive and powerful thesaurus functionality that invites the exploration of language.
  16. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.01
    0.0065842126 = product of:
      0.02633685 = sum of:
        0.02633685 = product of:
          0.0526737 = sum of:
            0.0526737 = weight(_text_:classification in 548) [ClassicSimilarity], result of:
              0.0526737 = score(doc=548,freq=8.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.35186368 = fieldWeight in 548, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=548)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections of documents that are classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners. Moreover, visualization offers possibilities to use the classification without use of the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to address these challenges. In our presentation, we discuss these challenges and give a demonstration of the way the application helps address them. Examples of visualizations can be found below.
  17. Burnett, R.: How images think (2004) 0.01
    0.005753411 = product of:
      0.023013644 = sum of:
        0.023013644 = weight(_text_:subject in 3884) [ClassicSimilarity], result of:
          0.023013644 = score(doc=3884,freq=6.0), product of:
            0.16812018 = queryWeight, product of:
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.04700564 = queryNorm
            0.13688806 = fieldWeight in 3884, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.576596 = idf(docFreq=3361, maxDocs=44218)
              0.015625 = fieldNorm(doc=3884)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: JASIST 56(2005) no.10, S.1126-1128 (P.K. Nayar): "How Images Think is an exercise both in philosophical meditation and critical theorizing about media, images, affects, and cognition. Burnett combines the insights of neuroscience with theories of cognition and the computer sciences. He argues that contemporary metaphors - biological or mechanical - about either cognition, images, or computer intelligence severely limit our understanding of the image. He suggests in his introduction that "image" refers to the "complex set of interactions that constitute everyday life in image-worlds" (p. xviii). For Burnett the fact that increasing amounts of intelligence are being programmed into technologies and devices that use images as their main form of interaction and communication-computers, for instance-suggests that images are interfaces, structuring interaction, people, and the environment they share. New technologies are not simply extensions of human abilities and needs; they literally enlarge cultural and social preconceptions of the relationship between body and mind. The flow of information today is part of a continuum, with exceptional events standing as punctuation marks. This flow connects a variety of sources, some of which are continuous - available 24 hours - or "live", and radically alters issues of memory and history. Television and the Internet, notes Burnett, are not simply a simulated world; they are the world, and the distinctions between "natural" and "non-natural" have disappeared. Increasingly, we immerse ourselves in the image, as if we are there. We rarely become conscious of the fact that we are watching images of events; for all perceptive, cognitive, and interpretive purposes, the image is the event for us. The proximity and distance of viewer from/with the viewed has altered so significantly that the screen is us. However, this is not to suggest that we are simply passive consumers of images. As Burnett points out, painstakingly, issues of creativity are involved in the process of visualization; viewers generate what they see in the images. This involves the historical moment of viewing, such as viewing images of the WTC bombings, and the act of re-imagining. As Burnett puts it, "the questions about what is pictured and what is real have to do with vantage points [of the viewer] and not necessarily what is in the image" (p. 26). In his second chapter Burnett moves on to a discussion of "imagescapes." Analyzing the analogue-digital programming of images, Burnett uses the concept of "reverie" to describe the viewing experience. The reverie is a "giving in" to the viewing experience, a "state" in which conscious ("I am sitting down on this sofa to watch TV") and unconscious (pleasure, pain, anxiety) processes interact. Meaning emerges in the not-always easy or "clean" process of hybridization. This "enhances" the thinking process beyond the boundaries of either image or subject. Hybridization is the space of intelligence, exchange, and communication.
    The sixth chapter looks at this interfacing of humans and machines and begins with a series of questions. The crucial one, to my mind, is this: "Does the distinction between humans and technology contribute to a lack of understanding of the continuous interrelationship and interdependence that exists between humans and all of their creations?" (p. 125) Burnett suggests that to use biological or mechanical views of the computer/mind (the computer as an input/output device) limits our understanding of the ways in which we interact with machines. He thus points to the role of language, the conversations (including the ones we held with machines when we were children) that seem to suggest a wholly different kind of relationship. Peer-to-peer communication (P2P), which is arguably the most widely used exchange mode of images today, is the subject of chapter seven. The issue here is whether P2P affects community building or community destruction. Burnett argues that the trope of community can be used to explore the flow of historical events that make up a continuum, from 17th-century letter writing to e-mail. In the new media - and Burnett uses the example of popular music, which can be sampled and re-edited to create new compositions - the interpretive space is more flexible. Private networks can be set up, and the process of information retrieval (about which Burnett has already expended considerable space in the early chapters) involves a lot more visualization. P2P networks, as Burnett points out, are about information management. They are about the harmony between machines and humans, and constitute a new ecology of communications. Turning to computer games, Burnett looks at the processes of interaction, experience, and reconstruction in simulated artificial life worlds, animations, and video images. For Burnett (like Andrew Darley, 2000, and Richard Doyle, 2003) the interactivity of the new media games suggests a greater degree of engagement with image-worlds. Today many facets of looking, listening, and gazing can be turned into aesthetic forms with the new media. Digital technology literally reanimates the world, as Burnett demonstrates in his concluding chapter. Burnett concludes that images no longer simply represent the world; they shape our very interaction with it; they become the foundation for our understanding the spaces, places, and historical moments that we inhabit. Burnett concludes his book with the suggestion that intelligence is now a distributed phenomenon (here closely paralleling Katherine Hayles' argument that subjectivity is dispersed through the cybernetic circuit, 1999). There is no one center of information or knowledge. Intersections of human creativity, work, and connectivity "spread" (Burnett's term) "intelligence through the use of mediated devices and images, as well as sounds" (p. 221).
    Burnett's work is a useful basic primer on the new media. One of the chief attractions here is his clear language, devoid of the jargon of either the computer sciences or advanced critical theory. This makes How Images Think an accessible introduction to digital cultures. Burnett explores the impact of the new technologies on not just image-making but on image-effects, and the ways in which images constitute our ecologies of identity, communication, and subject-hood. While some of the sections seem a little too basic (especially where he speaks about the ways in which we constitute an object as an object of art, see above), particularly in the wake of reception theory, it still remains a starting point for those interested in cultural studies of the new media. The case Burnett makes for the transformation of the ways in which we look at images has been strengthened by his attention to the history of this transformation, from photography through television and cinema and now to immersive virtual reality systems. Joseph Koerner (2004) has pointed out that the iconoclasm of early modern Europe actually demonstrates how idolatry was integral to the image-breakers' core belief. As Koerner puts it, "images never go away ... they persist and function by being perpetually destroyed" (p. 12). Burnett, likewise, argues that images in new media are reformed to suit new contexts of meaning-production, even when they appear to be destroyed. Images are recast, and the degree of their realism (or fantasy) heightened or diminished, but they do not "go away." Images do think, but - if I can parse Burnett's entire work - they think with, through, and in human intelligence, emotions, and intuitions. Images are uncanny: they are both us and not-us, ours and not-ours. There is, surprisingly, one factual error. Burnett claims that Myron Krueger pioneered the term "virtual reality." To the best of my knowledge, it was Jaron Lanier who did so (see Featherstone & Burrows, 1998 [1995], p. 5)."
  18. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    0.0055725384 = product of:
      0.022290153 = sum of:
        0.022290153 = product of:
          0.044580307 = sum of:
            0.044580307 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
              0.044580307 = score(doc=2500,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.2708308 = fieldWeight in 2500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2500)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 1.2007 18:22:41
  19. Denton, W.: On dentographs, a new method of visualizing library collections (2012) 0.01
    0.00526737 = product of:
      0.02106948 = sum of:
        0.02106948 = product of:
          0.04213896 = sum of:
            0.04213896 = weight(_text_:classification in 580) [ClassicSimilarity], result of:
              0.04213896 = score(doc=580,freq=2.0), product of:
                0.14969917 = queryWeight, product of:
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.04700564 = queryNorm
                0.28149095 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1847067 = idf(docFreq=4974, maxDocs=44218)
                  0.0625 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A dentograph is a visualization of a library's collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
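    The article's own examples use Ruby and R; as a rough illustration of the same idea in Python, with invented call numbers and a deliberately crude mapping from LC-style call numbers to grid cells:
    from collections import Counter
    import matplotlib.pyplot as plt

    holdings = ["QA76.9", "QA76.1", "QA276.4", "Z699.5", "Z695.1",
                "QA76.5", "P98.3", "Z699.2", "QA40.0", "P128.7"]

    def to_cell(call_number):
        # Classification as a function: call number -> (main class letter, number band).
        letters = "".join(c for c in call_number if c.isalpha())
        digits = "".join(c for c in call_number if c.isdigit() or c == ".")
        return letters[0], int(float(digits or "0") // 100)

    counts = Counter(to_cell(c) for c in holdings)
    xs = sorted({cell[0] for cell in counts})
    ys = sorted({cell[1] for cell in counts})
    grid = [[counts.get((x, y), 0) for x in xs] for y in ys]

    plt.imshow(grid, cmap="Greys", aspect="auto")
    plt.xticks(range(len(xs)), xs)
    plt.yticks(range(len(ys)), [f"{y * 100}s" for y in ys])
    plt.title("Toy dentograph: holdings per class cell")
    plt.show()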
  20. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.00
    0.004776461 = product of:
      0.019105844 = sum of:
        0.019105844 = product of:
          0.03821169 = sum of:
            0.03821169 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.03821169 = score(doc=1289,freq=2.0), product of:
                0.16460574 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04700564 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Vortrag anlässlich des Workshops: "Extending the multilingual capacity of The European Library in the EDL project Stockholm, Swedish National Library, 22-23 November 2007".

Years

Languages

  • e 49
  • d 4
  • a 1

Types

  • a 43
  • el 12
  • m 6
  • x 2
  • b 1
  • p 1
  • s 1