Search (73 results, page 2 of 4)

  • × theme_ss:"Visualisierung"
  • × year_i:[2010 TO 2020}
  1. Darányi, S.; Wittek, P.: Demonstrating conceptual dynamics in an evolving text collection (2013) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 1137) [ClassicSimilarity], result of:
              0.010148063 = score(doc=1137,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 1137, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1137)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
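
     The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output: fieldWeight = sqrt(termFreq) x idf x fieldNorm, queryWeight = idf x queryNorm, and the two coord(1/2) factors halve the product twice. As a rough check, the reported numbers for doc 1137 can be reproduced with the sketch below (small deviations are expected because Lucene rounds in single precision):

     import math

     # Values reported in the explain output for doc 1137 (query term "a")
     freq = 18.0                # termFreq within the field
     idf = 1.153047             # idf(docFreq=37942, maxDocs=44218)
     query_norm = 0.046056706   # queryNorm
     field_norm = 0.0390625     # fieldNorm(doc=1137)

     tf = math.sqrt(freq)                       # 4.2426405
     query_weight = idf * query_norm            # ~0.053105544
     field_weight = tf * idf * field_norm       # ~0.19109
     term_weight = query_weight * field_weight  # ~0.010148063
     score = term_weight * 0.5 * 0.5            # two coord(1/2) factors -> ~0.0025370
     print(score)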
    
    Abstract
     Based on real-world user demands, we demonstrate how animated visualization of evolving text corpora displays the underlying dynamics of semantic content. To interpret the results, one needs a dynamic theory of word meaning. We suggest that conceptual dynamics, as the interaction between kinds of intellectual and emotional content and language, is key for such a theory. We demonstrate our method by two-way seriation, which is a popular technique to analyze groups of similar instances and their features as well as the connections between the groups themselves. The two-way seriated data may be visualized as a two-dimensional heat map or as a three-dimensional landscape in which color codes or height correspond to the values in the matrix. In this article, we focus on two-way seriation of sparse data in the Reuters-21578 test collection. To achieve a meaningful visualization, we introduce a compactly supported convolution kernel similar to filter kernels used in image reconstruction and geostatistics. This filter populates the high-dimensional sparse space with values that interpolate nearby elements and provides insight into the clustering structure. We also extend two-way seriation to deal with online updates of both the row and column spaces and, combined with the convolution kernel, demonstrate a three-dimensional visualization of dynamics.
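
     Two-way seriation, as used in this abstract, reorders both the rows and the columns of a matrix so that similar rows and similar columns become adjacent; the reordered matrix is what then gets rendered as a heat map or 3-D landscape. A minimal illustrative sketch of that reordering step (using an ordinary hierarchical-clustering leaf order and assuming NumPy/SciPy are available; this is not the authors' algorithm and omits their convolution kernel):

     import numpy as np
     from scipy.cluster.hierarchy import linkage, leaves_list

     # Toy document-term count matrix (rows = documents, columns = terms).
     X = np.array([
         [5, 0, 1, 0],
         [4, 1, 0, 0],
         [0, 0, 3, 4],
         [1, 0, 2, 5],
     ], dtype=float)

     # Order rows by clustering the rows, columns by clustering the columns (X.T).
     row_order = leaves_list(linkage(X, method="average"))
     col_order = leaves_list(linkage(X.T, method="average"))

     seriated = X[row_order][:, col_order]   # matrix ready to plot as a heat map
     print(seriated)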
    Type
    a
  2. Mercun, T.; Zumer, M.; Aalberg, T.: Presenting bibliographic families : Designing an FRBR-based prototype using information visualization (2016) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 2879) [ClassicSimilarity], result of:
              0.010148063 = score(doc=2879,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 2879, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2879)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - Despite the importance of bibliographic information systems for discovering and exploring library resources, some of the core functionality that should be provided to support users in their information seeking process is still missing. Investigating these issues, the purpose of this paper is to design a solution that would fulfil the missing objectives. Design/methodology/approach - Building on the concepts of a work family, functional requirements for bibliographic records (FRBR) and information visualization, the paper proposes a model and user interface design that could support a more efficient and user-friendly presentation and navigation in bibliographic information systems. Findings - The proposed design brings together all versions of a work, related works, and other works by and about the author and shows how the model was implemented into a FrbrVis prototype system using hierarchical visualization layout. Research limitations/implications - Although issues related to discovery and exploration apply to various material types, the research first focused on works of fiction and was also limited by the selected sample of records. Practical implications - The model for presenting and interacting with FRBR-based data can serve as a good starting point for future developments and implementations. Originality/value - With FRBR concepts being gradually integrated into cataloguing rules, formats, and various bibliographic services, one of the important questions that has not really been investigated and studied is how the new type of data would be presented to users in a way that would exploit the true potential of the changes.
    Type
    a
  3. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 3437) [ClassicSimilarity], result of:
              0.010148063 = score(doc=3437,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 3437, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3437)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination (triangulation) of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
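
     Of the three layers combined above, the shared author keywords (SAK) map is the easiest to sketch: two units (here journals) are linked by the number of author keywords their papers share. The toy example below illustrates only that one layer; the journal names and keywords are made up and this is not the authors' pipeline:

     from itertools import combinations
     from collections import defaultdict

     # Toy records: unit (e.g. a journal) -> set of author keywords used in its papers.
     keywords = {
         "Water Research":        {"groundwater", "nitrate", "modelling"},
         "Hydrology Journal":     {"groundwater", "runoff", "modelling"},
         "Env. Policy Quarterly": {"governance", "water policy"},
     }

     # Edge weight = number of shared keywords between two units.
     edges = defaultdict(int)
     for a, b in combinations(sorted(keywords), 2):
         shared = keywords[a] & keywords[b]
         if shared:
             edges[(a, b)] = len(shared)

     print(dict(edges))   # {('Hydrology Journal', 'Water Research'): 2}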
    Type
    a
  4. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 3756) [ClassicSimilarity], result of:
              0.010148063 = score(doc=3756,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 3756, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3756)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates advantages of the graph-based approach.
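
     The "random walk measure" over the relational graph can be read as a personalized-PageRank-style ranking: walk from the visitor's profile node with a fixed restart probability and rank candidate presentations by their stationary visit probability. The sketch below shows that generic idea on a toy graph; the node set and the restart parameter are illustrative assumptions, not the authors' exact graph schema or measure:

     import numpy as np

     # Toy relational graph as an adjacency matrix over
     # [visitor profile, exhibit1, exhibit2, presentation1, presentation2].
     A = np.array([
         [0, 1, 1, 0, 0],
         [1, 0, 0, 1, 0],
         [1, 0, 0, 1, 1],
         [0, 1, 1, 0, 0],
         [0, 0, 1, 0, 0],
     ], dtype=float)

     P = A / A.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
     restart = np.array([1, 0, 0, 0, 0.0])   # restart at the visitor's profile node
     alpha = 0.15                            # restart probability

     r = restart.copy()
     for _ in range(100):                    # power iteration until (near) convergence
         r = alpha * restart + (1 - alpha) * r @ P

     # Rank the candidate presentations (indices 3 and 4) by visit probability.
     print(sorted([(r[3], "presentation1"), (r[4], "presentation2")], reverse=True))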
    Type
    a
  5. Wu, Y.; Bai, R.: ¬An event relationship model for knowledge organization and visualization (2017) 0.00
    0.0024857575 = product of:
      0.004971515 = sum of:
        0.004971515 = product of:
          0.00994303 = sum of:
            0.00994303 = weight(_text_:a in 3867) [ClassicSimilarity], result of:
              0.00994303 = score(doc=3867,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18723148 = fieldWeight in 3867, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3867)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     An event is a specific occurrence involving participants, which is a typed, n-ary association of entities or other events, each identified as a participant in a specific semantic role in the event (Pyysalo et al. 2012; Linguistic Data Consortium 2005). Event types may vary across domains. Representing relationships between events can facilitate the understanding of knowledge in complex systems (such as economic systems, the human body, and social systems). In the simplest form, an event can be represented as Entity A <Relation> Entity B. This paper evaluates several knowledge organization and visualization models and tools, such as concept maps (Cmap), topic maps (Ontopia), network analysis models (Gephi), and ontology (Protégé), and then proposes an event relationship model that aims to integrate the strengths of these models and can represent complex knowledge expressed in events and their relationships.
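
     The "Entity A <Relation> Entity B" form is the binary special case; the fuller notion is a typed, n-ary event whose participants each fill a semantic role and may themselves be events. A minimal sketch of such a structure (illustrative field and role names, not the authors' schema):

     from __future__ import annotations
     from dataclasses import dataclass, field

     @dataclass
     class Entity:
         name: str

     @dataclass
     class Event:
         event_type: str
         # semantic role -> participant; a participant may be an Entity or another Event
         participants: dict[str, object] = field(default_factory=dict)

     # Binary case: Entity A <Relation> Entity B
     raises = Event("raises", {"agent": Entity("insulin deficiency"),
                               "theme": Entity("blood glucose")})

     # Events can participate in other events (event-to-event relationship)
     cause = Event("causes", {"cause": raises,
                              "effect": Event("develops", {"theme": Entity("type 2 diabetes")})})
     print(cause.event_type, list(cause.participants))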
    Type
    a
  6. Aletras, N.; Baldwin, T.; Lau, J.H.; Stevenson, M.: Evaluating topic representations for exploring document collections (2017) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 3325) [ClassicSimilarity], result of:
              0.009567685 = score(doc=3325,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 3325, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3325)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Topic models have been shown to be a useful way of representing the content of large document collections, for example, via visualization interfaces (topic browsers). These systems enable users to explore collections by way of latent topics. A standard way to represent a topic is using a term list; that is, the top-n words with the highest conditional probability within the topic. Other topic representations such as textual and image labels have also been proposed. However, there has been no comparison of these alternative representations. In this article, we compare 3 different topic representations in a document retrieval task. Participants were asked to retrieve relevant documents based on predefined queries within a fixed time limit, presenting topics in one of the following modalities: (a) lists of terms, (b) textual phrase labels, and (c) image labels. Results show that textual labels are easier for users to interpret than are term lists and image labels. Moreover, the precision of retrieved documents for textual and image labels is comparable to the precision achieved by representing topics using term lists, demonstrating that labeling methods are an effective alternative topic representation.
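
     The "term list" representation discussed here is just the top-n words of a topic ranked by conditional probability. A minimal sketch, assuming a topic is available as a word-to-probability mapping (for example from an LDA-style model; the words below are made up):

     def top_n_terms(topic: dict, n: int = 10) -> list:
         """Return the n highest-probability words of a topic-word distribution."""
         ranked = sorted(topic.items(), key=lambda kv: kv[1], reverse=True)
         return [word for word, _ in ranked[:n]]

     topic = {"visualization": 0.12, "data": 0.10, "interface": 0.07,
              "user": 0.06, "colour": 0.02}
     print(top_n_terms(topic, n=3))   # ['visualization', 'data', 'interface']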
    Type
    a
  7. Lin, F.-T.: Drawing a knowledge map of smart city knowledge in academia (2019) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 5454) [ClassicSimilarity], result of:
              0.009567685 = score(doc=5454,freq=16.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 5454, product of:
                  4.0 = tf(freq=16.0), with freq of:
                    16.0 = termFreq=16.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5454)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This research takes the academic articles in the Web of Science's core collection database as a corpus to draw a series of knowledge maps, to explore the relationships, connectivity, distribution, and evolution among their keywords with respect to smart cities in the last decade. Beyond just drawing a text cloud or measuring their sizes, we further explore their texture by identifying the hottest keywords in academic articles, construct links between and among them that share common keywords, identify islands, rocks, reefs that are formed by connected articles (a metaphor inspired by Ong et al. (2005)), and analyze trends in their evolution. We found the following phenomena: 1) "Internet of Things" is the most frequently mentioned keyword in recent research articles; 2) the numbers of islands and reefs are increasing; 3) the evolutions of the numbers of weighted links have fractal-like structure; and 4) the coverage of the largest rock, formed by articles that share a common keyword, in the largest island is converging into around 10% to 20%. These phenomena imply that a common interest in the technology of smart cities has been emerging among researchers. However, the administrative, social, economic, and cultural issues need more attention in academia in the future.
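
     The island metaphor rests on a simple construction: link two articles whenever they share a keyword, and an "island" is then a connected component of the resulting graph. A hedged sketch of that construction on toy data (assuming networkx is available; the paper's thresholds and its rock/reef distinctions are not modelled here):

     from itertools import combinations
     import networkx as nx

     # Toy corpus: article id -> set of author keywords.
     articles = {
         "A1": {"internet of things", "smart city"},
         "A2": {"smart city", "governance"},
         "A3": {"governance"},
         "A4": {"machine learning"},
     }

     G = nx.Graph()
     G.add_nodes_from(articles)
     for a, b in combinations(articles, 2):
         if articles[a] & articles[b]:            # link articles sharing any keyword
             G.add_edge(a, b)

     islands = list(nx.connected_components(G))   # e.g. [{'A1', 'A2', 'A3'}, {'A4'}]
     print(islands)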
    Type
    a
  8. Wilson, M.: Interfaces for information retrieval (2011) 0.00
    0.0023678814 = product of:
      0.0047357627 = sum of:
        0.0047357627 = product of:
          0.009471525 = sum of:
            0.009471525 = weight(_text_:a in 549) [ClassicSimilarity], result of:
              0.009471525 = score(doc=549,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.17835285 = fieldWeight in 549, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.109375 = fieldNorm(doc=549)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  9. Soylu, A.; Giese, M.; Jimenez-Ruiz, E.; Kharlamov, E.; Zheleznyakov, D.; Horrocks, I.: Towards exploiting query history for adaptive ontology-based visual query formulation (2014) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 1576) [ClassicSimilarity], result of:
              0.009076704 = score(doc=1576,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 1576, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1576)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Grounded on real industrial use cases, we recently proposed an ontology-based visual query system for SPARQL, named OptiqueVQS. Ontology-based visual query systems employ ontologies and visual representations to depict the domain of interest and queries, and are promising to enable end users without any technical background to access data on their own. However, even with considerably small ontologies, the number of ontology elements to choose from increases drastically, and hence hinders usability. Therefore, in this paper, we propose a method using the log of past queries for ranking and suggesting query extensions as a user types a query, and identify emerging issues to be addressed.
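
     The core idea sketched in this abstract - ranking candidate query extensions by how often they followed the current partial query in past queries - can be illustrated with a naive frequency count. The element names below are hypothetical, and this is not OptiqueVQS's actual ranking model:

     from collections import Counter

     # Toy log of past visual queries, each a sequence of chosen ontology elements.
     query_log = [
         ["Well", "hasLocation", "Field"],
         ["Well", "hasLocation", "Field", "hasOperator"],
         ["Well", "hasMeasurement"],
     ]

     def suggest_next(partial: list, log: list, k: int = 3) -> list:
         """Rank next elements by how often they followed `partial` in the log."""
         counts = Counter(
             q[len(partial)]
             for q in log
             if q[:len(partial)] == partial and len(q) > len(partial)
         )
         return [elem for elem, _ in counts.most_common(k)]

     print(suggest_next(["Well"], query_log))   # ['hasLocation', 'hasMeasurement']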
    Type
    a
  10. Albertson, D.: Visual information seeking (2015) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 1847) [ClassicSimilarity], result of:
              0.009076704 = score(doc=1847,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 1847, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1847)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The present study reports on the information seeking processes in a visual context, referred to throughout as visual information seeking. This study synthesizes research throughout different, yet complementary, areas, each capable of contributing findings and understanding to visual information seeking. Methods previously applied for examining the visual information seeking process are reviewed, including interactive experiments, surveys, and various qualitative approaches. The methods and resulting findings are presented and structured according to generalized phases of existing information seeking models, which include the needs, actions, and assessments of users. A review of visual information needs focuses on need and thus query formulation; user actions, as reviewed, centers on search and browse behaviors and the observed trends, concluded by a survey of users' assessments of visual information as part of the interactive process. This separate examination, specific to a visual context, is significant; visual information can influence outcomes in an interactive process and presents variations in the types of needs, tasks, considerations, and decisions of users, as compared to information seeking in other contexts.
    Type
    a
  11. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 2906) [ClassicSimilarity], result of:
              0.009076704 = score(doc=2906,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 2906, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2906)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
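
     As one concrete example of the kind of impact metric such a platform can derive from harvested publication data, the h-index is computable from per-publication citation counts. The abstract does not say which metrics Manifold reports, so this is only an illustrative sketch:

     def h_index(citations: list) -> int:
         """Largest h such that at least h publications have >= h citations each."""
         h = 0
         for i, c in enumerate(sorted(citations, reverse=True), start=1):
             if c >= i:
                 h = i
             else:
                 break
         return h

     print(h_index([25, 8, 5, 3, 3, 1]))   # 3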
    Type
    a
  12. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 3195) [ClassicSimilarity], result of:
              0.009076704 = score(doc=3195,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 3195, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3195)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The scientific community's growing eagerness to make research data available to the public provides libraries - with our expertise in metadata and discovery - an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper will outline the successes, failures, and total redos that culminated in the current manifestation of our data catalog.
    Type
    a
  13. Wu, I.-C.; Vakkari, P.: Supporting navigation in Wikipedia by information visualization : extended evaluation measures (2014) 0.00
    0.0021393995 = product of:
      0.004278799 = sum of:
        0.004278799 = product of:
          0.008557598 = sum of:
            0.008557598 = weight(_text_:a in 1797) [ClassicSimilarity], result of:
              0.008557598 = score(doc=1797,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.16114321 = fieldWeight in 1797, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1797)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Purpose - The authors introduce two semantics-based navigation applications that facilitate information-seeking activities in internal link-based web sites in Wikipedia. These applications aim to help users find concepts within a topic and related articles on a given topic quickly and then gain topical knowledge from internal link-based encyclopedia web sites. The paper aims to discuss these issues. Design/methodology/approach - The WNavis application consists of three information visualization (IV) tools which are a topic network, a hierarchy topic tree and summaries for topics. The WikiMap application consists of a topic network. The goal of the topic network and topic tree tools is to help users to find the major concepts of a topic and identify relationships between these major concepts easily. In addition, in order to locate specific information and enable users to explore and read topic-related articles quickly, the topic tree and summaries for topics tools support users to gain topical knowledge quickly. The authors then apply the k-clique cohesion indicator to analyze the subtopics of the seed query and identify the best clustering results via the cosine measure. The authors utilize four metrics, which are correctness, time cost, usage behaviors, and satisfaction, to evaluate the three interfaces. These metrics measure both the outputs and outcomes of applications. As a baseline system for evaluation the authors used a traditional Wikipedia interface. For the evaluation, the authors used an experimental user study with 30 participants.
     Findings - The results indicate that both WikiMap and WNavis supported users in identifying concepts and their relations better than the baseline. In topical tasks, WNavis outperformed both WikiMap and the baseline system. Although there were no time differences in finding concepts or answering topical questions, the test systems provided users with a greater gain per time unit. The users of WNavis leaned on the hierarchy tree instead of other tools, whereas WikiMap users used the topic map. Research limitations/implications - The findings have implications for the design of IR support tools in knowledge-intensive web sites that help users to explore topics and concepts. Originality/value - The authors explored to what extent the use of each IV support tool contributed to successful exploration of topics in search tasks. The authors propose extended task-based evaluation measures to understand how each application provides useful context for users to accomplish the tasks and attain the search goals. That is, the authors not only evaluate the output of the search results, e.g. the number of relevant items retrieved, but also the outcome provided by the system for assisting users to attain the search goal.
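
     The "cosine measure" used in the abstract above to pick the best clustering result is the standard cosine similarity between feature vectors; a minimal sketch:

     import math

     def cosine(u: list, v: list) -> float:
         """Cosine similarity between two equal-length feature vectors."""
         dot = sum(a * b for a, b in zip(u, v))
         norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
         return dot / norm if norm else 0.0

     print(cosine([1, 2, 0], [2, 3, 1]))   # ~0.956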
    Type
    a
  14. Heuvel, C. van den; Salah, A.A.; Knowledge Space Lab: Visualizing universes of knowledge : design and visual analysis of the UDC (2011) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 4831) [ClassicSimilarity], result of:
              0.008285859 = score(doc=4831,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 4831, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4831)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     In the 1950s, the "universe of knowledge" metaphor returned in discussions around the "first theory of faceted classification", the Colon Classification (CC) of S.R. Ranganathan, to stress the differences within a "universe of concepts" system. Here we claim that the Universal Decimal Classification (UDC) has been either ignored or incorrectly represented in studies that focused on the pivotal role of Ranganathan in a transition from "top-down universe of concepts systems" to "bottom-up universe of concepts systems." Early 20th century designs from Paul Otlet reveal a two-directional interaction between "elements" and "ensembles" that can be compared to the relations between the universe of knowledge and universe of concepts systems. Moreover, an unpublished manuscript with the title "Theorie schematique de la Classification" of 1908 includes sketches that demonstrate an exploration by Paul Otlet of the multidimensional characteristics of the UDC. The interactions between these one- and multidimensional representations of the UDC support Donker Duyvis' critical comments to Ranganathan, who had dismissed it as a rigid hierarchical system in comparison to his own Colon Classification. A visualization of the experiments of the Knowledge Space Lab, in which main categories of Wikipedia were mapped on the UDC, provides empirical evidence of its faceted structure's flexibility.
    Source
    Classification and ontology: formal approaches and access to knowledge: proceedings of the International UDC Seminar, 19-20 September 2011, The Hague, The Netherlands. Eds.: A. Slavic u. E. Civallero
    Type
    a
  15. Sahib, N.G.; Tombros, A.; Stockman, T.: Evaluating a search interface for visually impaired searchers (2015) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2255) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2255,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2255, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2255)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Understanding the information-seeking behavior of visually impaired users is essential to designing search interfaces that support them during their search tasks. In a previous article, we reported the information-seeking behavior of visually impaired users when performing complex search tasks on the web, and we examined the difficulties encountered when interacting with search interfaces via speech-based screen readers. In this article, we use our previous findings to inform the design of a search interface to support visually impaired users for complex information seeking. We particularly focus on implementing TrailNote, a tool to support visually impaired searchers in managing the search process, and we also redesign the spelling-support mechanism using nonspeech sounds to address previously observed difficulties in interacting with this feature. To enhance the user experience, we have designed interface features to be technically accessible as well as usable with speech-based screen readers. We have evaluated the proposed interface with 12 visually impaired users and studied how they interacted with the interface components. Our findings show that the search interface was effective in supporting participants for complex information seeking and that the proposed interface features were accessible and usable with speech-based screen readers.
    Type
    a
  16. Zhang, J.; Zhao, Y.: ¬A user term visualization analysis based on a social question and answer log (2013) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2715) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2715,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2715, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2715)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The authors of this paper investigate consumers' diabetes-related terms based on a log from the Yahoo! Answers social question and answer (Q&A) forum, ascertain characteristics and relationships among terms related to diabetes from the consumers' perspective, and reveal users' diabetes information seeking patterns. In this study, the log analysis method, the data coding method, and the multidimensional scaling visualization analysis method were used for analysis. The visual analyses were conducted at two levels: terms analysis within a category and category analysis among the categories in the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. There were 12 categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) in the diabetes-related schema that emerged from the data coding analysis. The analyses at the two levels show that terms and categories were clustered and patterns were revealed. Future research directions are also included.
    Type
    a
  17. Oh, K.E.; Halpern, D.; Tremaine, M.; Chiang, J.; Silver, D.; Bemis, K.: Blocked: when the information is hidden by the visualization (2016) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 2888) [ClassicSimilarity], result of:
              0.008285859 = score(doc=2888,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 2888, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2888)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study investigated how people comprehend three-dimensional (3D) visualizations and what properties of such visualizations affect comprehension. Participants were asked to draw the face of a 3D visualization after it was cut in half. We videotaped the participants as they drew, erased, verbalized their thoughts, gestured, and moved about a two-dimensional paper presentation of the 3D visualization. The videorecords were analyzed using a grounded theory approach to generate hypotheses related to comprehension difficulties and visualization properties. Our analysis of the results uncovered three properties that made problem solving more difficult for participants. These were: (a) cuts that were at an angle in relation to at least one plane of reference, (b) nonplanar properties of the features contained in the 3D visualizations including curved layers and v-shaped layers, and (c) mixed combinations of layers. In contrast, (a) cutting planes that were perpendicular or parallel to the 3D visualization diagram's planes of reference, (b) internal features that were flat/planar, and (c) homogeneous layers were easier to comprehend. This research has direct implications for the generation and use of 3D information visualizations in that it suggests design features to include and avoid.
    Type
    a
  18. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3869) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3869,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3869, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3869)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. Therefore, the study aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, the analysis was conducted on the changes that result from the meeting of different sociocultural features, using a descriptive method. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy to show the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes through various editions were analyzed. Comparing the compositions of the main classes and distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualizing techniques generates empirical evidence leading to interpretation.
    Type
    a
  19. Hook, P.A.; Gantchev, A.: Using combined metadata sources to visualize a small library (OBL's English Language Books) (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3870) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3870,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3870, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3870)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     Data from multiple knowledge organization systems are combined to provide a global overview of the content holdings of a small personal library. Subject headings and classification data are used to effectively map the combined book and topic space of the library. While harvested and manipulated by hand, the work reveals issues and potential solutions when using automated techniques to produce topic maps of much larger libraries. The small library visualized consists of the thirty-nine digital English-language books found in the Osama Bin Laden (OBL) compound in Abbottabad, Pakistan upon his death. As this list of books has garnered considerable media attention, it is worth providing a visual overview of the subject content of these books - some of which is not readily apparent from the titles. Metadata from subject headings and classification numbers was combined to create book-subject maps. Tree maps of the classification data were also produced. The books contain 328 subject headings. In order to enhance the base map with meaningful thematic overlay, library holding count data was also harvested (and aggregated from duplicates). This additional data revealed the relative scarcity or popularity of individual books.
    Type
    a
  20. Hiniker, A.; Hong, S.R.; Kim, Y.-S.; Chen, N.-C.; West, J.D.; Aragon, C.: Toward the operationalization of visual metaphor (2017) 0.00
    0.0020714647 = product of:
      0.0041429293 = sum of:
        0.0041429293 = product of:
          0.008285859 = sum of:
            0.008285859 = weight(_text_:a in 3917) [ClassicSimilarity], result of:
              0.008285859 = score(doc=3917,freq=12.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15602624 = fieldWeight in 3917, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3917)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many successful digital interfaces employ visual metaphors to convey features or data properties to users, but the characteristics that make a visual metaphor effective are not well understood. We used a theoretical conception of metaphor from cognitive linguistics to design an interactive system for viewing the citation network of the corpora of literature in the JSTOR database, a highly connected compound graph of 2 million papers linked by 8 million citations. We created 4 variants of this system, manipulating 2 distinct properties of metaphor. We conducted a between-subjects experimental study with 80 participants to compare understanding and engagement when working with each version. We found that building on known image schemas improved response time on look-up tasks, while contextual detail predicted increases in persistence and the number of inferences drawn from the data. Schema-congruency combined with contextual detail produced the highest gains in comprehension. These findings provide concrete mechanisms by which designers presenting large data sets through metaphorical interfaces may improve their effectiveness and appeal with users.
    Type
    a

Languages

  • e 64
  • d 8
  • a 1

Types

  • a 69
  • el 16
  • m 2
  • r 1
  • s 1
  • x 1