Search (26 results, page 1 of 2)

  • theme_ss:"Visualisierung"
  1. Information visualization in data mining and knowledge discovery (2002) 0.05
    Date
    23. 3.2008 19:10:22
    Footnote
    Review in: JASIST 54(2003) no.9, pp. 905-906 (C.A. Badurek): "Visual approaches for knowledge discovery in very large databases are a prime research need for information scientists focused on extracting meaningful information from the ever-growing stores of data from a variety of domains, including business, the geosciences, and satellite and medical imagery. This work presents a summary of research efforts in the fields of data mining, knowledge discovery, and data visualization with the goal of aiding the integration of research approaches and techniques from these major fields. The editors, leading computer scientists from academia and industry, present a collection of 32 papers from contributors who are incorporating visualization and data mining techniques through academic research as well as application development in industry and government agencies. Information Visualization focuses upon techniques to enhance the natural abilities of humans to visually understand data, in particular, large-scale data sets. It is primarily concerned with developing interactive graphical representations to enable users to more intuitively make sense of multidimensional data as part of the data exploration process. It includes research from computer science, psychology, human-computer interaction, statistics, and information science. Knowledge Discovery in Databases (KDD) most often refers to the process of mining databases for previously unknown patterns and trends in data. Data mining refers to the particular computational methods or algorithms used in this process. The data mining research field is most related to computational advances in database theory, artificial intelligence and machine learning. This work compiles research summaries from these main research areas in order to provide "a reference work containing the collection of thoughts and ideas of noted researchers from the fields of data mining and data visualization" (p. 8). It addresses these areas in three main sections: the first on data visualization, the second on KDD and model visualization, and the last on using visualization in the knowledge discovery process. The seven chapters of Part One focus upon methodologies and successful techniques from the field of Data Visualization. Hoffman and Grinstein (Chapter 2) give a particularly good overview of the field of data visualization and its potential application to data mining. An introduction to the terminology of data visualization, its relation to perceptual and cognitive science, and a discussion of the major visualization display techniques are presented. Discussion and illustration explain the usefulness and proper context of such data visualization techniques as scatter plots, 2D and 3D isosurfaces, glyphs, parallel coordinates, and radial coordinate visualizations. Remaining chapters present the need for standardization of visualization methods, discussion of user requirements in the development of tools, and examples of using information visualization in addressing research problems.
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy yet integrating 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a URL provided suggests that readers may view all the book's figures in color on-line, although as of this submission date it only provides access to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the contributions of the sum of these contributed papers or give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300).
Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement the editors' goal of integrating work in this area. Unfortunately, many of the application chapters are thin, shallow, and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields."
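    As a small illustration of one of the display techniques the review lists (parallel coordinates), the following sketch plots a toy multidimensional data set; it assumes pandas and matplotlib are available, and the column names and values are invented for the example.

        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        # Toy multidimensional records (invented values), one polyline per record
        df = pd.DataFrame({
            "citations": [12, 85, 40, 3, 60],
            "downloads": [200, 950, 400, 50, 700],
            "pages":     [8, 14, 10, 4, 12],
            "figures":   [2, 9, 5, 1, 7],
            "group":     ["short", "long", "medium", "short", "long"],
        })

        # Each record becomes a line across the parallel axes;
        # the class column only controls line colour.
        parallel_coordinates(df, class_column="group", colormap="viridis")
        plt.title("Parallel coordinates (toy data)")
        plt.tight_layout()
        plt.show()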
  2. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.04
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
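    The abstract does not give implementation details, but the core step it describes (similarity measures applied to article metadata) can be roughly sketched as follows; the sketch assumes scikit-learn, and the metadata strings and threshold are invented for the example.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Invented article metadata (title and abstract concatenated per article)
        articles = [
            "Knowledge maps for literature search",
            "Visual interfaces to scholarly knowledge",
            "Natural language processing of article metadata",
        ]

        # Represent each article as a TF-IDF vector of its metadata text
        vectorizer = TfidfVectorizer(stop_words="english")
        tfidf = vectorizer.fit_transform(articles)

        # Pairwise cosine similarity between articles; a knowledge map could
        # group articles whose similarity exceeds some threshold
        similarity = cosine_similarity(tfidf)
        for i in range(len(articles)):
            for j in range(i + 1, len(articles)):
                if similarity[i, j] > 0.1:
                    print(articles[i], "<->", articles[j], round(float(similarity[i, j]), 2))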
  3. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.03
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read-a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to hone in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves-which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing"-but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space-we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
    When our experience of information discovery is mediated by a computer, we neither move ourselves nor the monitor. We have only the computer's monitor to view, and the keyboard and/or mouse to manipulate what is displayed there. Computer interfaces often reduce our ability to get a sense of the contents of a library: we don't perceive the scope of the library: its breadth, (the quantity of materials/information), its density (how full the shelves are, how thorough the collection is for individual topics), or the general audience for the materials (e.g., whether the materials are appropriate for middle school students, college professors, etc.). Additionally, many computer interfaces for information discovery require users to scroll through long lists, to click numerous navigational links and to read a lot of text to find the exact text they want to read. Text features of resources are almost always presented alphabetically, and the number of items in these alphabetical lists sometimes can be very long. Alphabetical ordering is certainly an improvement over no ordering, but it generally has no bearing on features with an inherent non-alphabetical ordering (e.g., dates of historical events), nor does it necessarily group similar items together. Alphabetical ordering of resources is analogous to one of the most familiar complaints about dictionaries: sometimes you need to know how to spell a word in order to look up its correct spelling in the dictionary. Some have used technology to replicate the appearance of physical libraries, presenting rooms of bookcases and shelves of book spines in virtual 3D environments. This approach presents a problem, as few book spines can be displayed legibly on a monitor screen. This article examines the role of book spines, call numbers, and other traditional organizational and information discovery concepts, and integrates this knowledge with information visualization techniques to show how computers and monitors can meet or exceed similar information discovery methods. The goal is to tap the unique potentials of current information visualization approaches in order to improve information discovery, offer new services, and most important of all, improve user satisfaction. We need to capitalize on what computers do well while bearing in mind their limitations. The intent is to design GUIs to optimize utility and provide a positive experience for the user.
  4. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.03
    Abstract
    The discovery service used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau), a search engine and service called WILBERT, comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Features such as ranking, autocompletion, multi-faceted classification, and refinement options reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of the topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (STEM) topics in order to build an advanced indexing of the library's holdings. We use iQvoc as a vocabulary management system to create the thesauri. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by allowing them to browse through the hierarchical structure of a subject, for instance logistics or computer science, and thereby discover how its terms are related. It also gives students insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development of the software, particularly the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3,000 terms in our discovery tool WILBERT.
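    To give a concrete idea of the kind of SKOS data such a vocabulary tool produces in the background, here is a minimal sketch that builds two related concepts with rdflib; the vocabulary URI, labels, and relations are invented for the example and are not taken from iQvoc or WILBERT.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, SKOS

        EX = Namespace("http://example.org/thesaurus/")  # hypothetical vocabulary base URI
        g = Graph()
        g.bind("skos", SKOS)

        logistics = EX["logistics"]
        scm = EX["supply-chain-management"]

        # Two SKOS concepts with preferred/alternative labels and a broader/narrower link
        g.add((logistics, RDF.type, SKOS.Concept))
        g.add((logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))
        g.add((scm, RDF.type, SKOS.Concept))
        g.add((scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
        g.add((scm, SKOS.altLabel, Literal("SCM", lang="en")))
        g.add((scm, SKOS.broader, logistics))   # SCM sits below Logistics in the hierarchy

        print(g.serialize(format="turtle"))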
  5. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.02
    Abstract
    The scientific community's growing eagerness to make research data available to the public provides libraries - with our expertise in metadata and discovery - an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper will outline the successes, failures, and total redos that culminated in the current manifestation of our data catalog.
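    As a rough sketch of how such a Solr-backed catalog can be searched over HTTP: only the standard Solr select endpoint is assumed, and the core name, field names, and query below are invented for the example.

        import requests

        SOLR_URL = "http://localhost:8983/solr/datacatalog/select"  # hypothetical core

        params = {
            "q": "title:census AND subject:population",  # hypothetical fields
            "fl": "id,title,description",
            "rows": 10,
            "wt": "json",
        }

        response = requests.get(SOLR_URL, params=params, timeout=10)
        response.raise_for_status()

        # Standard Solr JSON response: matching documents under response/docs
        for doc in response.json()["response"]["docs"]:
            print(doc.get("id"), "-", doc.get("title"))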
  6. Hoeber, O.: ¬A study of visually linked keywords to support exploratory browsing in academic search (2022) 0.02
    Abstract
    While the search interfaces used by common academic digital libraries provide easy access to a wealth of peer-reviewed literature, their interfaces provide little support for exploratory browsing. When faced with a complex search task (such as one that requires knowledge discovery), exploratory browsing is an important first step in an exploratory search process. To more effectively support exploratory browsing, we have designed and implemented a novel academic digital library search interface (KLink Search) with two new features: visually linked keywords and an interactive workspace. To study the potential value of these features, we have conducted a controlled laboratory study with 32 participants, comparing KLink Search to a baseline digital library search interface modeled after that used by IEEE Xplore. Based on subjective opinions, objective performance, and behavioral data, we show the value of adding lightweight visual and interactive features to academic digital library search interfaces to support exploratory browsing.
  7. Mercun, T.; Zumer, M.; Aalberg, T.: Presenting bibliographic families : Designing an FRBR-based prototype using information visualization (2016) 0.02
    Abstract
    Purpose - Despite the importance of bibliographic information systems for discovering and exploring library resources, some of the core functionality that should be provided to support users in their information seeking process is still missing. Investigating these issues, the purpose of this paper is to design a solution that would fulfil the missing objectives. Design/methodology/approach - Building on the concepts of a work family, Functional Requirements for Bibliographic Records (FRBR), and information visualization, the paper proposes a model and user interface design that could support a more efficient and user-friendly presentation and navigation in bibliographic information systems. Findings - The proposed design brings together all versions of a work, related works, and other works by and about the author, and shows how the model was implemented into a FrbrVis prototype system using a hierarchical visualization layout. Research limitations/implications - Although issues related to discovery and exploration apply to various material types, the research first focused on works of fiction and was also limited by the selected sample of records. Practical implications - The model for presenting and interacting with FRBR-based data can serve as a good starting point for future developments and implementations. Originality/value - With FRBR concepts being gradually integrated into cataloguing rules, formats, and various bibliographic services, one of the important questions that has not really been investigated and studied is how the new type of data would be presented to users in a way that would exploit the true potential of the changes.
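    A minimal sketch of the kind of work-family grouping the model describes, using plain dataclasses; the class and field names are invented for illustration and are not taken from FrbrVis.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Expression:
            """One version of a work, e.g. a translation or edition."""
            label: str
            language: str

        @dataclass
        class WorkFamily:
            """A work together with its versions and related works."""
            title: str
            author: str
            expressions: List[Expression] = field(default_factory=list)
            related_works: List[str] = field(default_factory=list)
            works_about: List[str] = field(default_factory=list)

        family = WorkFamily(
            title="Hamlet",
            author="Shakespeare, W.",
            expressions=[Expression("First Folio text", "en"),
                         Expression("German translation (Schlegel)", "de")],
            related_works=["Rosencrantz and Guildenstern Are Dead"],
            works_about=["A study of Hamlet's characters"],
        )
        # A hierarchical visualization would render the family as a tree:
        # work -> its versions / related works / works about the work or author
        print(family.title, "has", len(family.expressions), "versions")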
  8. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.02
    Date
    30. 5.2010 16:22:35
  9. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.02
    Date
    1. 2.2016 18:25:22
  10. Wainer, H.: Picturing the uncertain world : how to understand, communicate, and control uncertainty through graphical display (2009) 0.02
    Abstract
    In his entertaining and informative book "Graphic Discovery", Howard Wainer unlocked the power of graphical display to make complex problems clear. Now he's back with Picturing the Uncertain World, a book that explores how graphs can serve as maps to guide us when the information we have is ambiguous or incomplete. Using a visually diverse sampling of graphical display, from heartrending autobiographical displays of genocide in the Kovno ghetto to the 'Pie Chart of Mystery' in a "New Yorker" cartoon, Wainer illustrates the many ways graphs can be used - and misused - as we try to make sense of an uncertain world. "Picturing the Uncertain World" takes readers on an extraordinary graphical adventure, revealing how the visual communication of data offers answers to vexing questions yet also highlights the measure of uncertainty in almost everything we do. Are cancer rates higher or lower in rural communities? How can you know how much money to sock away for retirement when you don't know when you'll die? And where exactly did nineteenth-century novelists get their ideas? These are some of the fascinating questions Wainer invites readers to consider. Along the way he traces the origins and development of graphical display, from William Playfair, who pioneered the use of graphs in the eighteenth century, to instances today where the public has been misled through poorly designed graphs. We live in a world full of uncertainty, yet it is within our grasp to take its measure. Read "Picturing the Uncertain World" and learn how.
  11. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
  12. Zhu, B.; Chen, H.: Information visualization (2004) 0.01
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
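    The TreeMap technique mentioned above packs a hierarchy into nested rectangles whose areas encode size. Below is a minimal slice-and-dice sketch with matplotlib; the two-level hierarchy and the sizes are invented for the example, and this is not the commercial implementations cited in the abstract.

        import matplotlib.pyplot as plt
        from matplotlib.patches import Rectangle

        # Invented two-level hierarchy: category -> {item: size}
        tree = {
            "Databases":     {"Indexing": 40, "Query optimization": 25},
            "Visualization": {"Treemaps": 30, "Graphs": 20, "Maps": 15},
            "Data mining":   {"Clustering": 35, "Association rules": 10},
        }

        fig, ax = plt.subplots(figsize=(8, 4))
        total = sum(sum(children.values()) for children in tree.values())

        x = 0.0
        for category, children in tree.items():
            cat_total = sum(children.values())
            width = cat_total / total              # slice: x-axis split by category size
            y = 0.0
            for name, size in children.items():
                height = size / cat_total          # dice: y-axis split within the category
                ax.add_patch(Rectangle((x, y), width, height, fill=False))
                ax.text(x + width / 2, y + height / 2, name,
                        ha="center", va="center", fontsize=8)
                y += height
            ax.text(x + width / 2, 1.02, category, ha="center", fontsize=9)
            x += width

        ax.set_xlim(0, 1)
        ax.set_ylim(0, 1.08)
        ax.axis("off")
        plt.show()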
  13. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    Date
    30. 1.2007 18:22:41
  14. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    Content
    Paper presented at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  15. Thissen, F.: Screen-Design-Handbuch : Effektiv informieren und kommunizieren mit Multimedia (2001) 0.01
    Date
    22. 3.2008 14:35:21
  16. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    Date
    22. 7.2010 19:36:46
  17. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.01
    Date
    4. 2.2015 9:22:39
  18. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.01
    Abstract
    Nevertheless, because thesaurus use has been shown to improve retrieval, for our method we integrate functions in the search interface that permit users to explore built-in search vocabularies to improve retrieval from digital libraries. Our method automatically generates the terms and their semantic relationships representing relevant topics covered in a digital library. We call these generated terms the "concepts", and the generated terms and their semantic relationships we call the "concept space". Additionally, we used a visualization technique to display the concept space and allow users to interact with this space. The automatically generated term set is considered to be more representative of the subject area in a corpus than an "externally" imposed thesaurus, and our method has the potential of saving a significant amount of time and labor for those who have been manually creating thesauri as well. Information visualization is an emerging discipline and developed very quickly in the last decade. With growing volumes of documents and associated complexities, information visualization has become increasingly important. Researchers have found information visualization to be an effective way to use and understand information while minimizing a user's cognitive load. Our work was based on an algorithmic approach to concept discovery and association. Concepts are discovered using an algorithm based on an automated thesaurus generation procedure. Subsequently, similarities among terms are computed using the cosine measure, and the associations among terms are established using a method known as max-min distance clustering. The concept space is then visualized in a spring embedding graph, which roughly shows the semantic relationships among concepts in a 2-D visual representation. The semantic space of the visualization is used as a medium for users to retrieve the desired documents. In the remainder of this article, we present our algorithmic approach to concept generation and clustering, followed by a description of the visualization technique and interactive interface. The paper ends with key conclusions and discussions on future work.
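    The pipeline described here (terms weighted per document, cosine similarity between terms, then a spring-embedding layout) can be approximated with standard libraries. In the sketch below the documents, the similarity threshold, and the layout parameters are invented, and a simple TF-IDF term representation stands in for the article's thesaurus-generation and max-min clustering steps.

        import matplotlib.pyplot as plt
        import networkx as nx
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Invented document texts standing in for a digital library corpus
        docs = [
            "semantic analysis of digital library articles",
            "visualization of concept spaces for retrieval",
            "automatic thesaurus generation and term clustering",
            "interactive retrieval interfaces for digital libraries",
        ]

        # Each term is represented by the documents it occurs in (the columns of
        # the TF-IDF matrix); cosine similarity between these column vectors
        # gives term-term associations.
        vectorizer = TfidfVectorizer(stop_words="english")
        doc_term = vectorizer.fit_transform(docs)          # docs x terms
        term_sim = cosine_similarity(doc_term.T)           # terms x terms
        terms = vectorizer.get_feature_names_out()

        # Keep only the stronger associations as edges of the concept graph
        G = nx.Graph()
        for i in range(len(terms)):
            for j in range(i + 1, len(terms)):
                if term_sim[i, j] > 0.3:
                    G.add_edge(terms[i], terms[j], weight=float(term_sim[i, j]))

        # Force-directed ("spring embedding") layout of the concept space
        pos = nx.spring_layout(G, weight="weight", seed=42)
        nx.draw_networkx(G, pos, node_size=300, font_size=8)
        plt.axis("off")
        plt.show()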
  19. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    Date
    22. 3.2008 14:29:25
  20. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.01
    Date
    22. 7.2006 16:11:05

Languages

  • e 20
  • d 5
  • a 1

Types