Search (10 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Visualisierung"
  • year_i:[2000 TO 2010}
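  The three filter facets above are Lucene/Solr-style field queries; in the range filter year_i:[2000 TO 2010} the square bracket makes the lower bound inclusive and the curly brace makes the upper bound exclusive. As a minimal sketch (the endpoint URL and core name are assumptions; only the field names and values are taken from the filter list above), such filters could be sent to a Solr-style select handler as filter queries:

    import requests

    # Hypothetical Solr endpoint; the core name "literature" is an assumption.
    SOLR_URL = "http://localhost:8983/solr/literature/select"

    params = {
        "q": "*:*",                        # main query left open in this sketch
        "fq": [                            # one filter query per facet shown above
            'language_ss:"e"',
            'theme_ss:"Visualisierung"',
            "year_i:[2000 TO 2010}",       # 2000 inclusive, 2010 exclusive
        ],
        "rows": 10,
        "wt": "json",
    }

    response = requests.get(SOLR_URL, params=params)
    print(response.json()["response"]["numFound"])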
  1. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.02
    0.02455782 = sum of:
      0.009921885 = product of:
        0.049609426 = sum of:
          0.049609426 = weight(_text_:authors in 5272) [ClassicSimilarity], result of:
            0.049609426 = score(doc=5272,freq=2.0), product of:
              0.19698687 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04321011 = queryNorm
              0.25184128 = fieldWeight in 5272, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
        0.2 = coord(1/5)
      0.0146359345 = product of:
        0.029271869 = sum of:
          0.029271869 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
            0.029271869 = score(doc=5272,freq=2.0), product of:
              0.15131445 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04321011 = queryNorm
              0.19345059 = fieldWeight in 5272, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=5272)
        0.5 = coord(1/2)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
    Date
    22. 7.2006 16:11:05
  2. Information visualization in data mining and knowledge discovery (2002) 0.01
    0.01146704 = sum of:
      0.005612666 = product of:
        0.028063329 = sum of:
          0.028063329 = weight(_text_:authors in 1789) [ClassicSimilarity], result of:
            0.028063329 = score(doc=1789,freq=4.0), product of:
              0.19698687 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.04321011 = queryNorm
              0.14246294 = fieldWeight in 1789, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
        0.2 = coord(1/5)
      0.005854374 = product of:
        0.011708748 = sum of:
          0.011708748 = weight(_text_:22 in 1789) [ClassicSimilarity], result of:
            0.011708748 = score(doc=1789,freq=2.0), product of:
              0.15131445 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04321011 = queryNorm
              0.07738023 = fieldWeight in 1789, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=1789)
        0.5 = coord(1/2)
    
    Date
    23. 3.2008 19:10:22
    Footnote
    In 13 chapters, Part Two provides an introduction to KDD, an overview of data mining techniques, and examples of the usefulness of data model visualizations. The importance of visualization throughout the KDD process is stressed in many of the chapters. In particular, the need for measures of visualization effectiveness, benchmarking for identifying best practices, and the use of standardized sample data sets is convincingly presented. Many of the important data mining approaches are discussed in this complementary context. Cluster and outlier detection, classification techniques, and rule discovery algorithms are presented as the basic techniques common to the KDD process. The potential effectiveness of using visualization in the data modeling process is illustrated in chapters focused on using visualization for helping users understand the KDD process, ask questions and form hypotheses about their data, and evaluate the accuracy and veracity of their results. The 11 chapters of Part Three provide an overview of the KDD process and successful approaches to integrating KDD, data mining, and visualization in complementary domains. Rhodes (Chapter 21) begins this section with an excellent overview of the relation between the KDD process and data mining techniques. He states that the "primary goals of data mining are to describe the existing data and to predict the behavior or characteristics of future data of the same type" (p. 281). These goals are met by data mining tasks such as classification, regression, clustering, summarization, dependency modeling, and change or deviation detection. Subsequent chapters demonstrate how visualization can aid users in the interactive process of knowledge discovery by graphically representing the results from these iterative tasks. Finally, examples of the usefulness of integrating visualization and data mining tools in the domains of business, imagery and text mining, and massive data sets are provided. This text concludes with a thorough and useful 17-page index and a lengthy, integrative 17-page summary of the academic and industrial backgrounds of the contributing authors. A 16-page set of color inserts provides a better representation of the visualizations discussed, and a provided URL suggests that readers may view all the book's figures in color on-line, although as of this submission date it provides access only to a summary of the book and its contents. The overall contribution of this work is its focus on bridging two distinct areas of research, making it a valuable addition to the Morgan Kaufmann Series in Database Management Systems. The editors of this text have met their main goal of providing the first textbook integrating knowledge discovery, data mining, and visualization. Although it contributes greatly to our understanding of the development and current state of the field, a major weakness of this text is that there is no concluding chapter to discuss the collective contribution of these papers or to give direction to possible future areas of research. "Integration of expertise between two different disciplines is a difficult process of communication and reeducation. Integrating data mining and visualization is particularly complex because each of these fields in itself must draw on a wide range of research experience" (p. 300).
    Although this work contributes to the cross-disciplinary communication needed to advance visualization in KDD, a more formal call for an interdisciplinary research agenda in a concluding chapter would have provided a more satisfying conclusion to a very good introductory text.
    With contributors almost exclusively from the computer science field, the intended audience of this work is heavily slanted towards a computer science perspective. However, it is highly readable and provides introductory material that would be useful to information scientists from a variety of domains. Yet much interesting work in information visualization from other fields could have been included, giving the work more of an interdisciplinary perspective to complement its goal of integrating work in this area. Unfortunately, many of the application chapters are shallow and lack complementary illustrations of the visualization techniques or user interfaces used. However, they do provide insight into the many applications being developed in this rapidly expanding field. The authors have successfully put together a highly useful reference text for the data mining and information visualization communities. Those interested in a good introduction and overview of complementary research areas in these fields will be satisfied with this collection of papers. The focus upon integrating data visualization with data mining complements texts in each of these fields, such as Advances in Knowledge Discovery and Data Mining (Fayyad et al., MIT Press) and Readings in Information Visualization: Using Vision to Think (Card et al., Morgan Kaufmann). This unique work is a good starting point for future interaction between researchers in the fields of data visualization and data mining and makes a good accompaniment for a course focused on integrating these areas or to the main reference texts in these fields.
  3. Collins, L.M.; Hussell, J.A.T.; Hettinga, R.K.; Powell, J.E.; Mane, K.K.; Martinez, M.L.B.: Information visualization and large-scale repositories (2007) 0.01
    0.011093005 = product of:
      0.02218601 = sum of:
        0.02218601 = product of:
          0.11093004 = sum of:
            0.11093004 = weight(_text_:authors in 2596) [ClassicSimilarity], result of:
              0.11093004 = score(doc=2596,freq=10.0), product of:
                0.19698687 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04321011 = queryNorm
                0.5631342 = fieldWeight in 2596, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2596)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To describe how information visualization can be used in the design of interface tools for large-scale repositories. Design/methodology/approach - One challenge for designers in the context of large-scale repositories is to create interface tools that help users find specific information of interest. In order to be most effective, these tools need to leverage the cognitive characteristics of the target users. At the Los Alamos National Laboratory, the authors' target users are scientists and engineers who can be characterized as higher-order, analytical thinkers. In this paper, the authors describe a visualization tool they have created for making the authors' large-scale digital object repositories more usable for them: SearchGraph, which facilitates data set analysis by displaying search results in the form of a two- or three-dimensional interactive scatter plot. Findings - Using SearchGraph, users can view a condensed, abstract visualization of search results. They can view the same dataset from multiple perspectives by manipulating several display, sort, and filter options. Doing so allows them to see different patterns in the dataset. For example, they can apply a logarithmic transformation in order to create more scatter in a dense cluster of data points or they can apply filters in order to focus on a specific subset of data points. Originality/value - SearchGraph is a creative solution to the problem of how to design interface tools for large-scale repositories. It is particularly appropriate for the authors' target users, who are scientists and engineers. It extends the work of the first two authors on ActiveGraph, a read-write digital library visualization tool.
  4. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.008781561 = product of:
      0.017563121 = sum of:
        0.017563121 = product of:
          0.035126243 = sum of:
            0.035126243 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.035126243 = score(doc=1289,freq=2.0), product of:
                0.15131445 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04321011 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  5. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    0.0073179672 = product of:
      0.0146359345 = sum of:
        0.0146359345 = product of:
          0.029271869 = sum of:
            0.029271869 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.029271869 = score(doc=1397,freq=2.0), product of:
                0.15131445 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04321011 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 3.2008 14:29:25
  6. Zhang, J.; Nguyen, T.: WebStar: a visualization model for hyperlink structures (2005) 0.01
    0.005953131 = product of:
      0.011906262 = sum of:
        0.011906262 = product of:
          0.05953131 = sum of:
            0.05953131 = weight(_text_:authors in 1056) [ClassicSimilarity], result of:
              0.05953131 = score(doc=1056,freq=2.0), product of:
                0.19698687 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04321011 = queryNorm
                0.30220953 = fieldWeight in 1056, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1056)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    The authors introduce an information visualization model, WebStar, for hyperlink-based information systems. Hyperlinks within a hyperlink-based document can be visualized in a two-dimensional visual space. All links are projected within a display sphere in the visual space. The relationship between a specified central document and its hyperlinked documents is visually presented in the visual space. In addition, users are able to define a group of subjects and to observe relevance between each subject and all hyperlinked documents via movement of that subject around the display sphere center. WebStar allows users to dynamically change an interest center during navigation. A retrieval mechanism is developed to control retrieved results in the visual space. Impact of movement of a subject on the visual document distribution is analyzed. An ambiguity problem caused by projection is discussed. Potential applications of this visualization model in information retrieval are included. Future research directions on the topic are addressed.
  7. Large, J.A.; Beheshti, J.: Interface design, Web portals, and children (2005) 0.01
    0.005953131 = product of:
      0.011906262 = sum of:
        0.011906262 = product of:
          0.05953131 = sum of:
            0.05953131 = weight(_text_:authors in 5547) [ClassicSimilarity], result of:
              0.05953131 = score(doc=5547,freq=2.0), product of:
                0.19698687 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04321011 = queryNorm
                0.30220953 = fieldWeight in 5547, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5547)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    Children seek information in order to complete school projects on a wide variety of topics, as well as to support their various leisure activities. Such information can be found in print documents, but increasingly young people are turning to the Web to meet their information needs. In order to exploit this resource, however, children must be able to search or browse digital information through the intermediation of an interface. In particular, they must use Web-based portals that in most cases have been designed for adult users. Guidelines for interface design are not hard to find, but typically they also postulate adult rather than juvenile users. The authors discuss their own research work that has focused upon what young people themselves have to say about the design of portal interfaces. They conclude that specific interface design guidelines are required for young users rather than simply relying upon general design guidelines, and that in order to formulate such guidelines it is necessary to actively include the young people themselves in this process.
  8. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.01
    0.005854374 = product of:
      0.011708748 = sum of:
        0.011708748 = product of:
          0.023417495 = sum of:
            0.023417495 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.023417495 = score(doc=2659,freq=2.0), product of:
                0.15131445 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04321011 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.00
    0.0049609425 = product of:
      0.009921885 = sum of:
        0.009921885 = product of:
          0.049609426 = sum of:
            0.049609426 = weight(_text_:authors in 82) [ClassicSimilarity], result of:
              0.049609426 = score(doc=82,freq=2.0), product of:
                0.19698687 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04321011 = queryNorm
                0.25184128 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=82)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Abstract
    Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Science Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environments; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
  10. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004) 0.00
    0.003968754 = product of:
      0.007937508 = sum of:
        0.007937508 = product of:
          0.039687537 = sum of:
            0.039687537 = weight(_text_:authors in 2643) [ClassicSimilarity], result of:
              0.039687537 = score(doc=2643,freq=2.0), product of:
                0.19698687 = queryWeight, product of:
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.04321011 = queryNorm
                0.20147301 = fieldWeight in 2643, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.558814 = idf(docFreq=1258, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2643)
          0.2 = coord(1/5)
      0.5 = coord(1/2)
    
    Content
    1. Introduction
    Web search engines and digital libraries usually expect the users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase search may produce good search results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the information retrieval system concerned. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as the Web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid in searching by allowing the searcher to select appropriate search terms. A large volume of literature describes the design features of thesauri and experiments with their use in various types of information retrieval systems (see, for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997; and Shiri et al., 2002).
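  Each result above is followed by its Lucene "explain" breakdown: for every matching query term, queryWeight = idf x queryNorm and fieldWeight = tf x idf x fieldNorm are multiplied, and the matching clauses are then scaled by coord() and summed. The following is a minimal sketch that reproduces the numbers for result 1 (doc 5272); the formulas are standard ClassicSimilarity, while the inputs (freq, docFreq, maxDocs, queryNorm, fieldNorm, coord) are simply read off the explain tree rather than recomputed from the index:

    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        tf = math.sqrt(freq)                        # tf(freq) = sqrt(termFreq)
        term_idf = idf(doc_freq, max_docs)
        query_weight = term_idf * query_norm        # queryWeight = idf * queryNorm
        field_weight = tf * term_idf * field_norm   # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    QUERY_NORM = 0.04321011  # as printed in the explain trees

    w_authors = term_score(2.0, 1258, 44218, QUERY_NORM, 0.0390625)  # ~0.0496094
    w_22 = term_score(2.0, 3622, 44218, QUERY_NORM, 0.0390625)       # ~0.0292719

    # coord(n/m) scales each boolean clause by the fraction of sub-clauses matched
    total = w_authors * (1 / 5) + w_22 * (1 / 2)
    print(f"{total:.8f}")  # ~0.02455782, shown rounded to 0.02 next to result 1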
