Search (34 results, page 1 of 2)

  • theme_ss:"Visualisierung"
  1. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.03
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to the queries concerning the selected objects. Knowing the keywords which are names of the objects will enable situating the user's query in the information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might prove useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information-space visualization of a given document (book, article, website). The proposed procedure can practically be used as a new form of representation to map the contents of academic books and articles, besides the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can be used to help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  2. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.02
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as the basis for indexing, in particular its ability to adequately cover the contents to be indexed and to support the specific indexing method used, is of crucial importance in automatic indexing. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques and that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
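    A minimal sketch of the kind of statistical measure such a tool could build on, assuming Python; the function, the data structures, and the overload threshold are illustrative inventions, not the authors' implementation:

    from collections import Counter

    def usage_report(thesaurus_concepts, annotations, overload_factor=10):
        # annotations: one list of assigned concepts per document
        counts = Counter(c for doc in annotations for c in doc)
        mean = sum(counts.values()) / max(len(thesaurus_concepts), 1)
        unused = [c for c in thesaurus_concepts if counts[c] == 0]
        overloaded = [c for c, n in counts.items() if n > overload_factor * mean]
        return unused, overloaded

    concepts = ["economics", "inflation", "trade", "barter"]
    docs = [["economics", "inflation"], ["economics", "trade"], ["economics"]]
    print(usage_report(concepts, docs))  # 'barter' is never used; nothing is overloaded here

    Concepts that no indexer ever assigns, or that swallow a disproportionate share of all assignments, are exactly the kind of anomaly the paper's visualisation is meant to surface.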
  3. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.02
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
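    The core limitation the paper addresses can be seen with a three-word example (the words and similarity values below are invented for illustration): intransitive similarities violate the triangle inequality, so no single metric map can honour them all at once.

    # "tie" is similar to "suit" and to "rope", but "suit" and "rope" are
    # unrelated; with dissimilarity d = 1 - similarity, the triangle
    # inequality would force suit and rope to be close anyway.
    sim = {("tie", "suit"): 0.9, ("tie", "rope"): 0.9, ("suit", "rope"): 0.0}
    d = {pair: 1.0 - s for pair, s in sim.items()}
    lhs = d[("suit", "rope")]                        # 1.0
    rhs = d[("tie", "suit")] + d[("tie", "rope")]    # 0.1 + 0.1 = 0.2
    print(lhs <= rhs)  # False: these dissimilarities cannot be metric distances

    Multiple maps t-SNE sidesteps the contradiction by letting each word appear in several maps, so "tie" can sit near "suit" in one map and near "rope" in another.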
  4. Buchel, O.; Sedig, K.: Extending map-based visualizations to support visual tasks : the role of ontological properties (2011) 0.01
    Abstract
    Map-based visualizations of document collections have become popular in recent times. However, most of these visualizations emphasize only geospatial properties of objects, leaving out other ontological properties. In this paper we propose to extend these visualizations to include nongeospatial properties of documents to support users with elementary and synoptic visual tasks. More specifically, additional suitable representations that can enhance the utility of map-based visualizations are discussed. To demonstrate the utility of the proposed solution, we have developed a prototype map-based visualization system using Google Maps (GM), which demonstrates how additional representations can be beneficial.
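    A minimal sketch of the same idea, assuming Python with the folium package as a stand-in for the Google Maps prototype described in the paper; the documents, coordinates, and citation counts are invented:

    import folium

    docs = [  # (title, latitude, longitude, citation count) - invented values
        ("Survey A", 52.52, 13.40, 120),
        ("Case study B", 48.85, 2.35, 15),
    ]

    m = folium.Map(location=[50.0, 8.0], zoom_start=4)
    for title, lat, lon, cites in docs:
        folium.CircleMarker(
            location=[lat, lon],
            radius=3 + cites ** 0.5,  # a nongeospatial property drives marker size
            popup=f"{title} ({cites} citations)",
            fill=True,
        ).add_to(m)
    m.save("document_map.html")  # open the file in a browser to explore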
  5. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.01
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
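    scikit-learn ships a Barnes-Hut implementation of t-SNE in this spirit; a minimal usage sketch (not the author's reference code) on a small built-in data set:

    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    X = load_digits().data    # 1797 handwritten-digit images, 64 dimensions
    embedding = TSNE(
        n_components=2,
        method="barnes_hut",  # the O(N log N) gradient approximation
        angle=0.5,            # speed/accuracy trade-off of the tree traversal
        init="pca",
        random_state=0,
    ).fit_transform(X)
    print(embedding.shape)    # (1797, 2)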
  6. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.01
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the results in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
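    A toy treemap in the spirit of the Chapter 4 method, assuming Python with the squarify and matplotlib packages; the branch names and indexing counts are invented, and the thesis itself does not use this code:

    import matplotlib.pyplot as plt
    import squarify

    branch_counts = {"medicine": 420, "economics": 310, "law": 45, "arts": 5}
    squarify.plot(
        sizes=list(branch_counts.values()),
        label=[f"{name}\n{n}" for name, n in branch_counts.items()],
        alpha=0.8,
    )
    plt.axis("off")
    plt.title("Indexing frequency per thesaurus branch (invented counts)")
    plt.show()

    Tiles whose area is tiny relative to the size of their thesaurus branch are candidates for the "potential problems" the visualisation is meant to expose.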
  7. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    Date
    30. 5.2010 16:22:35
  8. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    Date
    1. 2.2016 18:25:22
  9. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  10. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.01
    Content
    1. Introduction Web search engines and digital libraries usually expect the users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the information retrieval system concerned. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features of, and experiments with the use of, thesauri in various types of information retrieval systems (see, for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997; and Shiri et al., 2002).
  11. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.01
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
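    The heavy-tailed kernel that gives t-SNE its crowding fix is easy to write down; a numpy sketch of the low-dimensional affinities q_ij (illustrative, not the paper's code):

    import numpy as np

    def t_sne_affinities(Y):
        # Y: (n, 2) map coordinates -> (n, n) joint probabilities q_ij
        sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        num = 1.0 / (1.0 + sq_dists)  # Student-t kernel, one degree of freedom
        np.fill_diagonal(num, 0.0)    # q_ii is defined as zero
        return num / num.sum()

    Y = np.random.default_rng(0).normal(size=(5, 2))
    Q = t_sne_affinities(Y)
    print(Q.sum())  # 1.0: the affinities form a valid joint distribution

    Because the Student-t kernel decays polynomially rather than exponentially, moderately distant map points keep non-negligible probability mass, which is what pushes crowded points apart in the centre of the map.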
  12. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
  13. Slavic, A.: Interface to classification : some objectives and options (2006) 0.01
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given, and examples of classification browsing interfaces are provided.
  14. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    Date
    30. 1.2007 18:22:41
  15. Rolling, L.: The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.01
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage of using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One can also get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations. Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and the European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
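    An arrowgraph of this kind is straightforward to reproduce today; a sketch assuming Python with networkx and matplotlib, with the terms and relations invented for illustration:

    import matplotlib.pyplot as plt
    import networkx as nx

    G = nx.DiGraph()  # directed edges join broader terms to narrower ones
    G.add_edges_from([
        ("metals", "cobalt"),
        ("metals", "steel"),
        ("steel", "stainless steel"),
        ("cobalt", "cobalt alloys"),
    ])
    nx.draw_networkx(G, pos=nx.spring_layout(G, seed=3), arrows=True,
                     node_color="lightgrey", font_size=8)
    plt.axis("off")
    plt.show()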
  16. Heo, M.; Hirtle, S.C.: An empirical comparison of visualization tools to assist information retrieval on the Web (2001) 0.01
    Abstract
    The reader of a hypertext document in a web environment, if maximum use of the document is to be obtained, must visualize the overall structure of the paths through the document as well as the document space. Graphic visualization displays of this space, produced to assist in navigation, are classified into four groups, and Heo and Hirtle compare three of these classes as to their effectiveness. Distortion displays expand regions of interest while relatively diminishing the detail of the remaining regions. This technique will show both local detail and global structure. Zoom techniques use a series of increasingly focused displays of smaller and smaller areas, and can reduce cognitive overload, but do not provide easy movement to other parts of the total space. Expanding outline displays use a tree structure to allow movement through a hierarchy of documents, but if the organization has a wide horizontal structure, or is not particularly hierarchical in nature, such displays can break down. Three-dimensional layouts, which are not evaluated here, place objects by location in three-dimensional space, providing more information and freedom. However, the space must be represented in two dimensions, resulting in difficulty in visually judging depth, size and positioning. Ten students were assigned to each of eight groups composed of viewers of the three techniques and an unassisted control group, using either a large (583 selected pages) or a small (50 selected pages) web space. Sets of 10 questions, which were designed to elicit the use of a visualization tool, were provided for each space. Accuracy and time spent were extracted from a log file. Users' views were also surveyed after completion. ANOVA shows significant differences in accuracy and time based upon the visualization tool in use. A Tukey test shows zoom accuracy to be significantly lower than that of the expanding outline, and zoom time to be significantly greater than both the outline and control groups. Size significantly affected accuracy and time, but had no interaction with tool type. While the expanding outline tool class outperformed zoom and distortion, its performance was not significantly different from that of the control group.
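    The statistical machinery reported here, sketched on invented numbers (the study's raw data are not reproduced): a one-way ANOVA across tool groups followed by Tukey's HSD, assuming Python with scipy and statsmodels:

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(1)
    groups = {  # simulated accuracy scores, ten participants per group
        "distortion": rng.normal(7.0, 1.0, 10),
        "zoom": rng.normal(5.5, 1.0, 10),
        "outline": rng.normal(7.5, 1.0, 10),
        "control": rng.normal(6.8, 1.0, 10),
    }
    print(f_oneway(*groups.values()))  # overall F statistic and p-value

    scores = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), 10)
    print(pairwise_tukeyhsd(scores, labels))  # which pairs of tools differ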
  17. Boyack, K.W.; Wylie, B.N.; Davidson, G.S.: Domain visualization using VxInsight® for science and technology management (2002) 0.01
    Abstract
    Boyack, Wylie, and Davidson developed VxInsight, which transforms information from documents into a landscape representation that conveys information on the implicit structure of the data as context for queries and exploration. From a list of pre-computed similarities it creates an x,y location on a plane for each item, or it can compute its own similarities based on direct and co-citation linkages. Three-dimensional overlays are then generated on the plane to show the extent of clustering at particular points. Metadata associated with clustered objects provides a label for each peak from common words. Clicking on an object will provide citation information, and answer sets for queries run will be displayed as markers on the landscape. A time slider allows a view of terrain changes over time. In a test on the microsystems engineering literature, a review article was used to provide seed terms to search Science Citation Index and retrieve 20,923 articles, of which 13,433 were connected by citation to at least one other article in the set. The citation list was used to calculate similarity measures and x,y coordinates for each article. Four main categories made up the landscape, with 90% of the articles directly related to one or more of the four. A second test used five databases, SCI, Cambridge Scientific Abstracts, Engineering Index, INSPEC, and Medline, to extract 17,927 unique articles by Sandia, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory, with the text of abstracts and RetrievalWare 6.6 utilized to generate the similarity measures. The subsequent map revealed that despite some overlap the laboratories generally publish in different areas. A third test on 3,000 physical science journals utilized 4.7 million articles from SCI, where similarity was the un-normalized sum of cites between journals in both directions. Physics occupies a central position, with engineering, mathematics, computing, and materials science strongly linked. Chemistry is farther removed but strongly connected.
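    Co-citation similarity of the kind VxInsight can consume is simple to compute; a sketch on invented reference lists, assuming Python:

    from collections import Counter
    from itertools import combinations

    bibliographies = [  # each inner list: the references of one citing paper
        ["A", "B", "C"],
        ["A", "B"],
        ["B", "C", "D"],
    ]

    # two articles are similar when later papers cite them together,
    # so count unordered pairs within each bibliography
    cocitation = Counter()
    for refs in bibliographies:
        for pair in combinations(sorted(set(refs)), 2):
            cocitation[pair] += 1

    print(cocitation)  # ('A','B') and ('B','C') co-cited twice, the rest once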
  18. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  19. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    Date
    22. 3.2008 14:35:21
  20. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    Date
    22. 7.2010 19:36:46

Languages

  • e 29
  • d 4
  • a 1

Types

  • a 24
  • el 7
  • m 5
  • x 3
  • p 1
  • s 1