Search (67 results, page 1 of 4)

  • theme_ss:"Visualisierung"
  1. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.02
    0.023821492 = product of:
      0.059553728 = sum of:
        0.024150565 = weight(_text_:it in 1397) [ClassicSimilarity], result of:
          0.024150565 = score(doc=1397,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.15977642 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.035403162 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
          0.035403162 = score(doc=1397,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.19345059 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
      0.4 = coord(2/5)
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Date
    22. 3.2008 14:29:25
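    The score breakdowns shown for each result are Lucene "explain" dumps for ClassicSimilarity, i.e. classic tf-idf. As a reading aid, here is a minimal sketch that recomputes the 0.02 shown for result 1 from the printed values; it assumes the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and takes queryNorm and fieldNorm from the output rather than deriving them.

        import math

        # Values read off the explain tree for doc 1397 above.
        query_norm = 0.052260913
        field_norm = 0.0390625  # length norm, quantized by Lucene

        def term_score(freq, doc_freq, max_docs):
            idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ClassicSimilarity idf
            tf = math.sqrt(freq)                             # ClassicSimilarity tf
            query_weight = idf * query_norm
            field_weight = tf * idf * field_norm
            return query_weight * field_weight               # per-term score

        s_it = term_score(2.0, 6664, 44218)   # ~0.0241506 for _text_:it
        s_22 = term_score(2.0, 3622, 44218)   # ~0.0354032 for _text_:22
        coord = 2 / 5                         # coord(2/5): 2 of 5 query terms matched
        print((s_it + s_22) * coord)          # ~0.0238215, the 0.02 of result 1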
  2. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.02
    0.022694334 = product of:
      0.056735836 = sum of:
        0.035493936 = weight(_text_:it in 3035) [ClassicSimilarity], result of:
          0.035493936 = score(doc=3035,freq=12.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.23482242 = fieldWeight in 3035, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.021241898 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
          0.021241898 = score(doc=3035,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.116070345 = fieldWeight in 3035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
      0.4 = coord(2/5)
    
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures, which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more than 650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score, a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
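    The article-level Eigenfactor score modifies PageRank in ways the article does not spell out; the core idea, that a citation counts for more when the citing paper is itself highly ranked, can be shown with a bare-bones PageRank power iteration over a hypothetical four-paper citation graph (the graph, damping factor and iteration count are illustrative assumptions, not the authors' method).

        import numpy as np

        # Hypothetical toy graph: cites[i, j] = 1 if paper i cites paper j.
        cites = np.array([[0, 1, 1, 0],
                          [0, 0, 1, 0],
                          [0, 0, 0, 1],
                          [1, 0, 0, 0]], dtype=float)

        d = 0.85                                               # usual damping factor
        transition = cites / cites.sum(axis=1, keepdims=True)  # every row cites someone

        n = len(cites)
        rank = np.full(n, 1 / n)
        for _ in range(100):                                   # power iteration
            rank = (1 - d) / n + d * transition.T @ rank

        print(rank)  # a citation from a high-rank paper lifts the cited paper more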
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter, but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists, who write and read most of the papers in PubMed Central, are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Dr Howe and his colleagues do, however, believe that the study of diagrams can result in new insights. A figure showing new metabolic pathways in a cell, for example, may summarise hundreds of experiments. Since illustrations can convey important scientific concepts in this way, they think that browsing through related figures from different papers may help researchers come up with new theories. As Dr Howe puts it, "the unit of scientific currency is closer to the figure than to the paper." With this thought in mind, the team have created a website (http://viziometrics.org/) where the millions of images sorted by their program can be searched using key words. Their next plan is to extract the information from particular types of scientific figure, to create comprehensive "super" figures: a giant network of all the known chemical processes in a cell for example, or the best-available tree of life. At just one such superfigure per paper, though, the citation records of articles containing such all-embracing diagrams may very well undermine the correlation that prompted their creation in the first place. Call it the ultimate marriage of chart and science.
  3. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.02
    0.019057194 = product of:
      0.047642983 = sum of:
        0.019320453 = weight(_text_:it in 2659) [ClassicSimilarity], result of:
          0.019320453 = score(doc=2659,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.12782113 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.02832253 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
          0.02832253 = score(doc=2659,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.15476047 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
      0.4 = coord(2/5)
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
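    The 1986 conversion rule described in the abstract above, symmetric "See Also" pairs become associative (RT) while one-way references are assumed hierarchical, can be made concrete with a small sketch. The headings and links below are hypothetical, and the code is a reconstruction of the rule as described, not the Library's actual program; note how a merely missing reciprocal link silently produces a "hierarchical" relationship.

        # Hypothetical pre-conversion "See Also" data.
        see_also = {
            "Doorbells": {"Bells"},
            "Bells": {"Doorbells", "Campanology"},  # Bells -> Campanology is one-way
        }

        def classify(a, b):
            forward = b in see_also.get(a, set())
            backward = a in see_also.get(b, set())
            if forward and backward:
                return "RT"     # symmetric: associative
            if forward:
                return "BT/NT"  # asymmetric: assumed hierarchical
            return None

        print(classify("Doorbells", "Bells"))    # RT
        print(classify("Bells", "Campanology"))  # BT/NT, even if the reciprocal
                                                 # link was simply never entered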
  4. Information visualization in data mining and knowledge discovery (2002) 0.02
    0.01588793 = product of:
      0.03971982 = sum of:
        0.025558555 = weight(_text_:it in 1789) [ClassicSimilarity], result of:
          0.025558555 = score(doc=1789,freq=14.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.16909146 = fieldWeight in 1789, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
        0.014161265 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
          0.014161265 = score(doc=1789,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.07738023 = fieldWeight in 1789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.015625 = fieldNorm(doc=1789)
      0.4 = coord(2/5)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    Review in: JASIST 54(2003) no.9, S.905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
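    Of the display techniques the reviewer lists, parallel coordinates are the least self-explanatory; a minimal sketch with pandas and matplotlib (the data are hypothetical, purely for illustration) shows the idea of drawing one polyline per record across parallel axes.

        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        # Hypothetical three-dimensional records with a class label.
        df = pd.DataFrame({
            "x1":  [1.0, 1.2, 3.1, 2.9],
            "x2":  [2.0, 2.1, 0.5, 0.4],
            "x3":  [0.3, 0.2, 1.8, 1.9],
            "cls": ["a", "a", "b", "b"],
        })

        parallel_coordinates(df, class_column="cls")  # one polyline per row
        plt.show()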
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization to help users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300). Although this work contributes to the crossdisciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement the editors' goal of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
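    The integration the reviewer keeps returning to, mine the data first and then inspect the result graphically, can be sketched in a few lines with scikit-learn and matplotlib (the synthetic blobs and parameter choices are illustrative assumptions, not an example from the book).

        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Synthetic two-dimensional data drawn around three centres.
        centres = [(0, 0), (3, 0), (0, 3)]
        X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in centres])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        plt.scatter(X[:, 0], X[:, 1], c=labels)  # visual check of the clustering
        plt.show()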
  5. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.014161265 = product of:
      0.070806324 = sum of:
        0.070806324 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
          0.070806324 = score(doc=3406,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.38690117 = fieldWeight in 3406, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3406)
      0.2 = coord(1/5)
    
    Date
    30. 5.2010 16:22:35
  6. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.014161265 = product of:
      0.070806324 = sum of:
        0.070806324 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
          0.070806324 = score(doc=2755,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.38690117 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
      0.2 = coord(1/5)
    
    Date
    1. 2.2016 18:25:22
  7. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.01
    0.013385601 = product of:
      0.06692801 = sum of:
        0.06692801 = weight(_text_:it in 3887) [ClassicSimilarity], result of:
          0.06692801 = score(doc=3887,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.44278538 = fieldWeight in 3887, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=3887)
      0.2 = coord(1/5)
    
    Abstract
    Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. By exploring how it behaves in simple cases, we can learn to use it more effectively. We'll walk through a series of simple examples to illustrate what t-SNE diagrams can and cannot show. The t-SNE technique really is useful, but only if you know how to interpret it.
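    The article's point, that t-SNE output depends heavily on how you run it, is easy to reproduce: the sketch below (synthetic data; perplexity values chosen arbitrarily) embeds the same two well-separated clusters at three perplexities, and the apparent cluster sizes and spacing differ each time.

        import numpy as np
        import matplotlib.pyplot as plt
        from sklearn.manifold import TSNE

        rng = np.random.default_rng(0)
        # Two well-separated Gaussian clusters in 50 dimensions.
        X = np.vstack([rng.normal(0, 1, (100, 50)),
                       rng.normal(5, 1, (100, 50))])

        for perplexity in (5, 30, 80):  # same data, three hyperparameter settings
            Y = TSNE(n_components=2, perplexity=perplexity,
                     init="pca", random_state=0).fit_transform(X)
            plt.figure()
            plt.scatter(Y[:, 0], Y[:, 1])
            plt.title(f"perplexity={perplexity}")
        plt.show()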
  8. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.01
    0.012815738 = product of:
      0.06407869 = sum of:
        0.06407869 = weight(_text_:it in 79) [ClassicSimilarity], result of:
          0.06407869 = score(doc=79,freq=22.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.42393473 = fieldWeight in 79, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.2 = coord(1/5)
    
    Abstract
    With the tremendous growth in data volume, and data being produced every second on millions of devices across the globe, there is a pressing need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) that focuses on manipulating web data on behalf of humans. Because of its ability to integrate data from disparate sources, which makes it more user-friendly, the Semantic Web is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data Visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of Data Visualization, making the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies and to elucidate the issues, as well as the solutions, surrounding the semantic web. It highlights the semantic web architecture in detail while also comparing it with the traditional search system. It classifies the semantic web architecture into three major pillars, i.e., RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology. It attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also outlines their solutions.
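    Of the three pillars the chapter names, RDF is the one a few lines of code can make tangible; a hedged sketch with rdflib (the namespace and statements are invented for illustration) builds two triples and prints them as Turtle.

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/")  # hypothetical vocabulary
        g = Graph()
        g.add((EX.SemanticWeb, RDF.type, EX.Technology))
        g.add((EX.SemanticWeb, EX.introducedBy, Literal("Tim Berners-Lee")))

        print(g.serialize(format="turtle"))  # subject-predicate-object statements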
  9. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.012016232 = product of:
      0.06008116 = sum of:
        0.06008116 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
          0.06008116 = score(doc=3355,freq=4.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.32829654 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
      0.2 = coord(1/5)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  10. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    0.009912886 = product of:
      0.04956443 = sum of:
        0.04956443 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
          0.04956443 = score(doc=2500,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.2708308 = fieldWeight in 2500, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2500)
      0.2 = coord(1/5)
    
    Date
    30. 1.2007 18:22:41
  11. Gelernter, J.: Visual classification with information visualization (Infoviz) for digital library collections (2007) 0.01
    0.009563136 = product of:
      0.04781568 = sum of:
        0.04781568 = weight(_text_:it in 423) [ClassicSimilarity], result of:
          0.04781568 = score(doc=423,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.31634116 = fieldWeight in 423, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0546875 = fieldNorm(doc=423)
      0.2 = coord(1/5)
    
    Abstract
    The purpose of information visualization (infoviz) is to show information graphically. That purpose is often obscured by infoviz designs that are not well understood in practice. This paper offers an overview of infoviz culled from the literature on applications of information visualization for the digital library: how the clustering that creates the topics works, and how those topics are represented graphically. It presents a taxonomy of infoviz designs in one, two and three dimensions. It is suggested that user evaluations of infoviz designs might be used to enrich infoviz theory and that, whether through application of the theory or of user remarks, developers might improve infoviz interface comprehensibility. Design recommendations are made in an effort to improve weaknesses and capitalize on strengths of present interfaces in representing knowledge visually.
  12. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
          0.042483795 = score(doc=1289,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 1289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
      0.2 = coord(1/5)
    
    Content
    Vortrag anlässlich des Workshops: "Extending the multilingual capacity of The European Library in the EDL project Stockholm, Swedish National Library, 22-23 November 2007".
  13. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 1781) [ClassicSimilarity], result of:
          0.042483795 = score(doc=1781,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 1781, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1781)
      0.2 = coord(1/5)
    
    Date
    22. 3.2008 14:35:21
  14. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
          0.042483795 = score(doc=3693,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 3693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
      0.2 = coord(1/5)
    
    Date
    22. 7.2010 19:36:46
  15. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.01
    0.008496759 = product of:
      0.042483795 = sum of:
        0.042483795 = weight(_text_:22 in 1709) [ClassicSimilarity], result of:
          0.042483795 = score(doc=1709,freq=2.0), product of:
            0.18300882 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.052260913 = queryNorm
            0.23214069 = fieldWeight in 1709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=1709)
      0.2 = coord(1/5)
    
    Date
    4. 2.2015 9:22:39
  16. Hoeber, O.; Yang, X.D.: HotMap : supporting visual exploration of Web search results (2009) 0.01
    0.008366001 = product of:
      0.041830003 = sum of:
        0.041830003 = weight(_text_:it in 2700) [ClassicSimilarity], result of:
          0.041830003 = score(doc=2700,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27674085 = fieldWeight in 2700, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2700)
      0.2 = coord(1/5)
    
    Abstract
    Although information retrieval techniques used by Web search engines have improved substantially over the years, the results of Web searches have continued to be represented in simple list-based formats. Although the list-based representation makes it easy to evaluate a single document for relevance, it does not support users in the broader tasks of manipulating or exploring the search results as they attempt to find a collection of relevant documents. HotMap is a meta-search system that provides a compact visual representation of Web search results at two levels of detail, and it supports interactive exploration via nested sorting of Web search results based on query term frequencies. An evaluation of the search results for a set of vague queries has shown that the re-sorted search results can provide a higher proportion of relevant documents among the top search results. User studies show an increase in speed and effectiveness and a reduction in missed documents when comparing HotMap to the list-based representation used by Google. Subjective measures were positive, and users showed a preference for the HotMap interface. These results provide evidence for the utility of next-generation Web search results interfaces that promote interactive search results exploration.
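    HotMap's "nested sorting ... based on query term frequencies" is, at its core, a lexicographic sort; a minimal sketch (the record layout and field names are assumptions, not HotMap's data model) shows a primary sort term with ties broken by a secondary one.

        # Hypothetical result records with per-term frequencies.
        results = [
            {"url": "a.example", "tf": {"hotmap": 3, "search": 1}},
            {"url": "b.example", "tf": {"hotmap": 3, "search": 4}},
            {"url": "c.example", "tf": {"hotmap": 5, "search": 0}},
        ]

        # Nested sort: primary term first, ties broken by the secondary term.
        ranked = sorted(results,
                        key=lambda r: (r["tf"]["hotmap"], r["tf"]["search"]),
                        reverse=True)
        print([r["url"] for r in ranked])  # ['c.example', 'b.example', 'a.example']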
  17. Osiñska, V.: Visual analysis of classification scheme (2010) 0.01
    0.008366001 = product of:
      0.041830003 = sum of:
        0.041830003 = weight(_text_:it in 4068) [ClassicSimilarity], result of:
          0.041830003 = score(doc=4068,freq=6.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27674085 = fieldWeight in 4068, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4068)
      0.2 = coord(1/5)
    
    Abstract
    This paper proposes a novel methodology to visualize a classification scheme. It is demonstrated with the Association for Computing Machinery (ACM) Computing Classification System (CCS). The collection, derived from the ACM digital library, contains 37,543 documents classified by CCS. The assigned classes, subject descriptors, and keywords were processed into a dataset to produce a graphical representation of the documents. The general conception is based on the similarity of co-classes (themes), taken to be proportional to the number of common publications. The final number of all possible classes and subclasses in the collection was 353, and the similarity matrix of co-classes therefore had the same dimension. A spherical surface was chosen as the target information space. The locations of class and document nodes on the sphere were obtained by means of Multidimensional Scaling coordinates. By representing the surface on a plane, like a map projection, it is possible to analyze the visualization layout. The graphical patterns were organized into colour clusters. For evaluation of the resulting visualization maps, graphics filtering was applied. The proposed method can be very useful in interdisciplinary research fields. It allows a great amount of heterogeneous information to be conveyed in a compact display, including topics, relationships among topics, frequency of occurrence, importance, and changes of these properties over time.
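    The pipeline in the abstract, a co-class similarity matrix embedded with Multidimensional Scaling and laid out on a sphere, can be approximated as follows; the 3-class similarity counts are invented, and projecting the MDS points onto the unit sphere is this sketch's stand-in for the paper's spherical layout, not necessarily the authors' procedure.

        import numpy as np
        from sklearn.manifold import MDS

        # Hypothetical co-class matrix: publications shared by each class pair.
        common = np.array([[20.0,  8.0,  1.0],
                           [ 8.0, 15.0,  3.0],
                           [ 1.0,  3.0, 10.0]])

        # Convert similarities to dissimilarities and embed in three dimensions.
        dissim = common.max() - common
        np.fill_diagonal(dissim, 0.0)
        xyz = MDS(n_components=3, dissimilarity="precomputed",
                  random_state=0).fit_transform(dissim)

        xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)  # push onto unit sphere
        print(xyz)  # node positions; similar classes end up close together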
  18. Barton, P.: ¬A missed opportunity : why the benefits of information visualisation seem still out of sight (2005) 0.01
    0.008196974 = product of:
      0.04098487 = sum of:
        0.04098487 = weight(_text_:it in 1293) [ClassicSimilarity], result of:
          0.04098487 = score(doc=1293,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 1293, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=1293)
      0.2 = coord(1/5)
    
    Abstract
    This paper aims to identify what information visualisation is and how, in conjunction with the computer, it can be used as a tool to expand understanding. It also seeks to explain how information visualisation has been fundamental to the development of the computer from its very early days to Apple's launch of the now ubiquitous WIMP (windows, icons, menus, pointer) graphical user interface in 1984. An attempt is also made to question why, after many years of progress and development through the late 1960s and 1970s, very little has changed in the way we interact with the data on our computers since the watershed of the Macintosh, and, in conclusion, where the future of information visualisation may lie.
  19. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.01
    0.008196974 = product of:
      0.04098487 = sum of:
        0.04098487 = weight(_text_:it in 3230) [ClassicSimilarity], result of:
          0.04098487 = score(doc=3230,freq=4.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.27114958 = fieldWeight in 3230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.2 = coord(1/5)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader-count data using publication-year 2012 data from the Web of Science. Example networks are shown and explained. The reader can use our base maps to visualize other results with the VOSviewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that the user does not have to produce a network based on all data (e.g., from one year), but can instead collect the Mendeley data for a single institution (or for journals or topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and to create the network with the overlay technique.
  20. Collins, C.: WordNet explorer : applying visualization principles to lexical semantics (2006) 0.01
    0.0077281813 = product of:
      0.038640905 = sum of:
        0.038640905 = weight(_text_:it in 1288) [ClassicSimilarity], result of:
          0.038640905 = score(doc=1288,freq=2.0), product of:
            0.15115225 = queryWeight, product of:
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.052260913 = queryNorm
            0.25564227 = fieldWeight in 1288, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.892262 = idf(docFreq=6664, maxDocs=44218)
              0.0625 = fieldNorm(doc=1288)
      0.2 = coord(1/5)
    
    Abstract
    Interface designs for lexical databases in NLP have suffered from not following design principles developed in the information visualization research community. We present a design paradigm and show that it can be used to generate visualizations which maximize the usability and utility of WordNet. The techniques can be applied generally to other lexical databases used in NLP research.
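    The structure such a visualization has to lay out can be inspected directly in WordNet; a small sketch with NLTK (the chosen word and sense index are arbitrary) walks one hypernym chain, the kind of edge a WordNet explorer draws.

        from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

        synset = wn.synsets("dog")[0]          # first sense of "dog"
        while synset.hypernyms():              # climb the hypernym chain
            parent = synset.hypernyms()[0]
            print(synset.name(), "->", parent.name())
            synset = parent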

Languages

  • e 60
  • d 6
  • a 1

Types

  • a 50
  • el 19
  • m 9
  • s 2
  • x 2
  • r 1
