Search (41 results, page 1 of 3)

  • Filter: theme_ss:"Visualisierung"
  1. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.03
    0.033829853 = product of:
      0.067659706 = sum of:
        0.04883048 = weight(_text_:social in 1289) [ClassicSimilarity], result of:
          0.04883048 = score(doc=1289,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26434162 = fieldWeight in 1289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.018829225 = product of:
          0.03765845 = sum of:
            0.03765845 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.03765845 = score(doc=1289,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
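The breakdown above composes exactly as Lucene's ClassicSimilarity explain output indicates: queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm (with tf = sqrt(freq)), each clause's score is the product of the two, and the coord factors scale the partial sums. A minimal sketch recomputing the tree for doc 1289 from the values shown (Python used purely for illustration):

```python
import math

# Values copied from the explain tree above (doc 1289).
idf_social = 3.9875789      # idf(docFreq=2228, maxDocs=44218)
idf_22 = 3.5018296          # idf(docFreq=3622, maxDocs=44218)
query_norm = 0.046325076
field_norm = 0.046875
tf = math.sqrt(2.0)         # tf(freq=2.0) = sqrt(freq) in ClassicSimilarity

# Each clause: queryWeight * fieldWeight.
social = (idf_social * query_norm) * (tf * idf_social * field_norm)
term_22 = (idf_22 * query_norm) * (tf * idf_22 * field_norm)

# The "22" clause sits one level deeper, so it picks up coord(1/2);
# the whole sum is then scaled by coord(2/4).
score = (social + term_22 * 0.5) * 0.5   # ~0.033829853
```

Running this reproduces the reported clause weights (0.04883048 and 0.03765845) and the total document score to the printed precision.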
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
     Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  2. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.02
    0.022553235 = product of:
      0.04510647 = sum of:
        0.032553654 = weight(_text_:social in 2659) [ClassicSimilarity], result of:
          0.032553654 = score(doc=2659,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.17622775 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.025105633 = score(doc=2659,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Source
     Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications (DC 2008), Berlin, Germany, 22-26 September 2008 / ed. by Jane Greenberg and Wolfgang Klas
  3. Zhang, J.; Zhao, Y.: ¬A user term visualization analysis based on a social question and answer log (2013) 0.02
    0.017620182 = product of:
      0.07048073 = sum of:
        0.07048073 = weight(_text_:social in 2715) [ClassicSimilarity], result of:
          0.07048073 = score(doc=2715,freq=6.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3815443 = fieldWeight in 2715, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2715)
      0.25 = coord(1/4)
    
    Abstract
     The authors investigate consumers' diabetes-related terms based on a log from the Yahoo! Answers social question and answer (Q&A) forum, ascertain characteristics of and relationships among diabetes-related terms from the consumers' perspective, and reveal users' diabetes information-seeking patterns. In this study, log analysis, data coding, and multidimensional scaling visualization methods were used. The visual analyses were conducted at two levels: term analysis within a category and category analysis among the categories in the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. Twelve categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) emerged from the data coding analysis of the diabetes-related schema. The analyses at the two levels show that terms and categories were clustered and patterns were revealed. Future research directions are also included.
  4. Tscherteu, G.; Langreiter, C.: Explorative Netzwerkanalyse im Living Web (2009) 0.02
    0.016276827 = product of:
      0.06510731 = sum of:
        0.06510731 = weight(_text_:social in 4870) [ClassicSimilarity], result of:
          0.06510731 = score(doc=4870,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.3524555 = fieldWeight in 4870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0625 = fieldNorm(doc=4870)
      0.25 = coord(1/4)
    
    Source
     Social Semantic Web: Web 2.0, was nun? Ed. by A. Blumauer and T. Pellegrini
  5. Moya-Anegón, F. de; Vargas-Quesada, B.; Chinchilla-Rodríguez, Z.; Corera-Álvarez, E.; Munoz-Fernández, F.J.; Herrero-Solana, V.; SCImago Group: Visualizing the marrow of science (2007) 0.01
    0.0142422225 = product of:
      0.05696889 = sum of:
        0.05696889 = weight(_text_:social in 1313) [ClassicSimilarity], result of:
          0.05696889 = score(doc=1313,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.30839854 = fieldWeight in 1313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1313)
      0.25 = coord(1/4)
    
    Abstract
    This study proposes a new methodology that allows for the generation of scientograms of major scientific domains, constructed on the basis of cocitation of Institute of Scientific Information categories, and pruned using PathfinderNetwork, with a layout determined by algorithms of the spring-embedder type (Kamada-Kawai), then corroborated structurally by factor analysis. We present the complete scientogram of the world for the Year 2002. It integrates the natural sciences, the social sciences, and arts and humanities. Its basic structure and the essential relationships therein are revealed, allowing us to simultaneously analyze the macrostructure, microstructure, and marrow of worldwide scientific output.
  6. Wu, Y.; Bai, R.: ¬An event relationship model for knowledge organization and visualization (2017) 0.01
    0.01220762 = product of:
      0.04883048 = sum of:
        0.04883048 = weight(_text_:social in 3867) [ClassicSimilarity], result of:
          0.04883048 = score(doc=3867,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.26434162 = fieldWeight in 3867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=3867)
      0.25 = coord(1/4)
    
    Abstract
     An event is a specific occurrence involving participants: a typed, n-ary association of entities or other events, each identified as a participant in a specific semantic role in the event (Pyysalo et al. 2012; Linguistic Data Consortium 2005). Event types may vary across domains. Representing relationships between events can facilitate the understanding of knowledge in complex systems (such as economic systems, the human body, or social systems). In its simplest form, an event can be represented as Entity A <Relation> Entity B. This paper evaluates several knowledge organization and visualization models and tools, such as concept maps (Cmap), topic maps (Ontopia), network analysis models (Gephi), and ontologies (Protégé), and then proposes an event relationship model that integrates the strengths of these models and can represent complex knowledge expressed in events and their relationships.
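The typed, n-ary event structure described in this abstract can be sketched as a small data model in which participants fill semantic roles and may themselves be entities or other events. The class names, roles, and example events below are hypothetical illustrations, not the paper's own model:

```python
from dataclasses import dataclass

# Hypothetical sketch: an event is a typed, n-ary association whose
# participants fill semantic roles and may be entities or other events.

@dataclass(frozen=True)
class Entity:
    name: str

@dataclass(frozen=True)
class Event:
    event_type: str
    participants: tuple  # pairs of (semantic role, Entity or Event)

    def roles(self):
        return [role for role, _ in self.participants]

# Simplest binary form, "Entity A <Relation> Entity B":
sale = Event("Sale", (("seller", Entity("A")), ("buyer", Entity("B"))))

# Events are first-class participants, so relationships between events
# (the focus of the proposed model) nest naturally:
report = Event("Report", (("reporter", Entity("C")), ("topic", sale)))
```

Making events first-class participant values is what lets the structure express event-to-event relationships rather than only entity-to-entity ones.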
  7. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 4486) [ClassicSimilarity], result of:
              0.083649375 = score(doc=4486,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 4486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4486)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
  8. Denton, W.: On dentographs, a new method of visualizing library collections (2012) 0.01
    0.010456172 = product of:
      0.041824687 = sum of:
        0.041824687 = product of:
          0.083649375 = sum of:
            0.083649375 = weight(_text_:aspects in 580) [ClassicSimilarity], result of:
              0.083649375 = score(doc=580,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.39950368 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    A dentograph is a visualization of a library's collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
  9. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 82) [ClassicSimilarity], result of:
          0.040692065 = score(doc=82,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 82, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=82)
      0.25 = coord(1/4)
    
    Abstract
     Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Science Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environment; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
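The local-impact measure defined in this abstract, a journal's share of the total citations received within its citation environment, reduces to a simple normalization. A minimal sketch with invented counts (not JCR data):

```python
# Illustrative sketch of the local-impact measure: within a citation
# environment (a set of journals plus their aggregated journal-journal
# citations), a journal's local impact is its share of all citations
# received in that environment. The counts below are invented.

def local_impact(cited_in_env):
    """cited_in_env: {journal: citations received within the environment}."""
    total = sum(cited_in_env.values())
    return {journal: count / total for journal, count in cited_in_env.items()}

env = {"Scientometrics": 500, "JASIST": 300, "J. Informetrics": 200}
shares = local_impact(env)   # Scientometrics receives half the citations
```

In the paper's maps this share drives the vertical node size; recomputing it after removing within-journal self-citations would give the horizontal size.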
  10. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 4837) [ClassicSimilarity], result of:
          0.040692065 = score(doc=4837,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 4837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4837)
      0.25 = coord(1/4)
    
    Abstract
     Icon systems could offer an efficient solution for collective iconic categorization of knowledge by providing graphical interpretation. Their pictorial character helps visualize the structure of a text, making it more understandable across vocabulary barriers. In this paper we propose a knowledge engineering (KE) based iconic representation approach. We assume that these systematic icons improve collective knowledge management. Meanwhile, text (structured under our knowledge management model, Hypertopic) helps to reduce the diversity of graphical interpretations among different users. This position paper also prepares to test our hypothesis through an "iconic social tagging" experiment, to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project, and a part of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  11. Lin, F.-T.: Drawing a knowledge map of smart city knowledge in academia (2019) 0.01
    0.010173016 = product of:
      0.040692065 = sum of:
        0.040692065 = weight(_text_:social in 5454) [ClassicSimilarity], result of:
          0.040692065 = score(doc=5454,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.22028469 = fieldWeight in 5454, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5454)
      0.25 = coord(1/4)
    
    Abstract
     This research takes the academic articles in the Web of Science's core collection database as a corpus to draw a series of knowledge maps, to explore the relationships, connectivity, distribution, and evolution among their keywords with respect to smart cities in the last decade. Beyond just drawing a text cloud or measuring their sizes, we further explore their texture by identifying the hottest keywords in academic articles, construct links between and among articles that share common keywords, identify islands, rocks, and reefs that are formed by connected articles (a metaphor inspired by Ong et al. 2005), and analyze trends in their evolution. We found the following phenomena: 1) "Internet of Things" is the most frequently mentioned keyword in recent research articles; 2) the numbers of islands and reefs are increasing; 3) the evolution of the numbers of weighted links has a fractal-like structure; and 4) the coverage of the largest rock, formed by articles that share a common keyword, in the largest island is converging to around 10% to 20%. These phenomena imply that a common interest in the technology of smart cities has been emerging among researchers. However, the administrative, social, economic, and cultural issues need more attention in academia in the future.
  12. Information visualization : human-centered issues and perspectives (2008) 0.01
    0.009242038 = product of:
      0.036968153 = sum of:
        0.036968153 = product of:
          0.073936306 = sum of:
            0.073936306 = weight(_text_:aspects in 3285) [ClassicSimilarity], result of:
              0.073936306 = score(doc=3285,freq=4.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.35311472 = fieldWeight in 3285, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3285)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     This book is the outcome of the Dagstuhl Seminar on "Information Visualization - Human-Centered Issues in Visual Representation, Interaction, and Evaluation" held at Dagstuhl Castle, Germany, from May 28 to June 1, 2007. Information Visualization (InfoVis) is a relatively new research area which focuses on the use of visualization techniques to help people understand and analyze data. This book documents and extends the findings and discussions of the various sessions in detail. The seven contributions cover the most important topics: Part I offers general reflections on the value of information visualization; evaluating information visualizations; theoretical foundations of information visualization; and teaching information visualization. Part II deals with specific aspects of creation and collaboration: engaging new audiences for information visualization; process and pitfalls in writing information visualization research papers; and visual analytics: definition, process, and challenges.
    Content
     Contents:
     Part I. General Reflections:
     The Value of Information Visualization / Jean-Daniel Fekete, Jarke J. van Wijk, John T. Stasko, Chris North
     Evaluating Information Visualizations / Sheelagh Carpendale
     Theoretical Foundations of Information Visualization / Helen C. Purchase, Natalia Andrienko, T.J. Jankun-Kelly, Matthew Ward
     Teaching Information Visualization / Andreas Kerren, John T. Stasko, Jason Dykes
     Part II. Specific Aspects:
     Creation and Collaboration: Engaging New Audiences for Information Visualization / Jeffrey Heer, Frank van Ham, Sheelagh Carpendale, Chris Weaver, Petra Isenberg
     Process and Pitfalls in Writing Information Visualization Research Papers / Tamara Munzner
     Visual Analytics: Definition, Process, and Challenges / Daniel Keim, Gennady Andrienko, Jean-Daniel Fekete, Carsten Görg, Jörn Kohlhammer, Guy Melancon
  13. Ekström, B.: Trace data visualisation enquiry : a methodological coupling for studying information practices in relation to information systems (2022) 0.01
    0.009242038 = product of:
      0.036968153 = sum of:
        0.036968153 = product of:
          0.073936306 = sum of:
            0.073936306 = weight(_text_:aspects in 687) [ClassicSimilarity], result of:
              0.073936306 = score(doc=687,freq=4.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.35311472 = fieldWeight in 687, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=687)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
     Purpose The purpose of this paper is to examine whether and how a methodological coupling of visualisations of trace data and interview methods can be utilised for information practices studies. Design/methodology/approach Trace data visualisation enquiry is suggested as the coupling of visualising data exported from an information system and using these visualisations as the basis for interview guides and elicitation in information practices research. The methodology is illustrated and applied through a small-scale empirical study of a citizen science project. Findings The study found that trace data visualisation enquiry enabled fine-grained investigation of temporal aspects of information practices, and made it possible to compare and explore temporal and geographical aspects of practices. Moreover, the methodology made it possible to inquire into information practices through trace data that were discussed with participants through elicitation. The study also found that the methodology can aid a researcher in gaining a simultaneously overarching and close picture of information practices, which can lead to theoretical and methodological implications for information practices research. Originality/value Trace data visualisation enquiry extends current methods for investigating information practices, as it enables focus to be placed on the traces of practices recorded through interactions with information systems and on study participants' accounts of activities.
  14. Quirin, A.; Cordón, O.; Santamaría, J.; Vargas-Quesada, B.; Moya-Anegón, F.: ¬A new variant of the Pathfinder algorithm to generate large visual science maps in cubic time (2008) 0.01
    0.008138414 = product of:
      0.032553654 = sum of:
        0.032553654 = weight(_text_:social in 2112) [ClassicSimilarity], result of:
          0.032553654 = score(doc=2112,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.17622775 = fieldWeight in 2112, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=2112)
      0.25 = coord(1/4)
    
    Abstract
     In the last few years, there has been increasing interest in generating visual representations of very large scientific domains. A methodology based on the combined use of ISI-JCR category cocitation and social network analysis through the Pathfinder algorithm has demonstrated its ability to achieve high-quality, schematic visualizations for these kinds of domains. Now, the next step would be to generate these scientograms in an online fashion. To do so, there is a need to significantly decrease the run time of the pruning technique when working with category cocitation matrices of large dimension like the ones handled in these large domains (Pathfinder has a time complexity of O(n^4), with n being the number of categories in the cocitation matrix, i.e., the number of nodes in the network). Although a previous improvement called Binary Pathfinder has already been proposed to speed up the original algorithm, its time complexity reduction is not enough for that aim. In this paper, we make use of a shortest path computation different from classical approaches in computer science graph theory to propose a new variant of the Pathfinder algorithm which allows us to reduce its time complexity by one order of magnitude, to O(n^3), and thus to significantly decrease the run time of the implementation when applied to large scientific domains with the parameter q = n - 1. Besides, the new algorithm has a much simpler structure than the Binary Pathfinder, and it saves a significant amount of memory with respect to the original Pathfinder by reducing the space complexity to the need of storing just two matrices. An experimental comparison using large networks from real-world domains shows the good performance of the new proposal.
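For the common Pathfinder parameter choice q = n - 1 with the r = infinity (maximum) metric, the cubic-time idea can be sketched as a Floyd-Warshall-style pass over the minimax distance: an edge survives pruning iff no alternative path has a smaller maximum edge weight. This is a minimal sketch under those stated assumptions, not the authors' implementation:

```python
import math

def pathfinder_prune(w):
    """w: symmetric n x n dissimilarity matrix, math.inf where no link.
    Returns the surviving links as a set of (i, j) pairs with i < j.
    An edge survives iff no other path has a smaller maximum edge
    weight, computed by a Floyd-Warshall-style minimax pass in O(n^3)."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Minimax distance: smallest "worst edge" over all paths.
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return {(i, j)
            for i in range(n) for j in range(i + 1, n)
            if w[i][j] < math.inf and w[i][j] <= d[i][j]}

# Toy network: the direct 0-2 link (weight 3) is dominated by the path
# 0-1-2, whose worst edge weighs only 1, so the direct link is pruned.
w = [[0, 1, 3, math.inf],
     [1, 0, 1, math.inf],
     [3, 1, 0, 1],
     [math.inf, math.inf, 1, 0]]
kept = pathfinder_prune(w)   # {(0, 1), (1, 2), (2, 3)}
```

The three nested loops make the cubic cost explicit, and only the input matrix and one working copy are stored, matching the two-matrix space bound mentioned in the abstract.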
  15. Stodola, J.T.: ¬The concept of information and questions of users with visual disabilities : an epistemological approach (2014) 0.01
    0.008138414 = product of:
      0.032553654 = sum of:
        0.032553654 = weight(_text_:social in 1783) [ClassicSimilarity], result of:
          0.032553654 = score(doc=1783,freq=2.0), product of:
            0.1847249 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046325076 = queryNorm
            0.17622775 = fieldWeight in 1783, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=1783)
      0.25 = coord(1/4)
    
    Abstract
     Purpose - The purpose of this paper is to evaluate the functionality of particular epistemological schools with regard to the issues of users with visual impairment, to offer a theoretical answer to the question of why these issues are not at the center of interest of information science, and to find an epistemological approach that can create a theoretical basis for analyzing the relationship between information and visually impaired users. Design/methodology/approach - The methodological basis of the paper is determined by the selection of the epistemological approach. In order to think about the concept of information and to relate it to issues associated with users with visual impairment, a conceptual analysis is applied. Findings - Most information science theories are based on empiricism and rationalism; this is the reason for their low interest in the questions of visually impaired users. Users with visual disabilities are outside the interest of rationalistic epistemology because it underestimates sensory perception; empiricism is not interested in them, paradoxically, because it overestimates sensory perception. Realism, which fairly reflects such issues, is an approach that allows the provision of information to persons with visual disabilities to be dealt with properly. Research limitations/implications - The paper has a speculative character. Its findings should be supported by empirical research in the future. Practical implications - The theoretical questions addressed in the paper come from the practice of providing information to visually impaired users. Because practice influences theory and vice versa, the author hopes that the findings of the paper can serve to improve practice in the field. Social implications - The paper provides theoretical anchoring of issues related to the inclusion of people with disabilities in society, and its findings have the potential to support such efforts. Originality/value - This is the first study linking questions of users with visual disabilities to highly abstract issues connected to the concept of information.
  16. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.062764086 = score(doc=3406,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 5.2010 16:22:35
  17. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.007845511 = product of:
      0.031382043 = sum of:
        0.031382043 = product of:
          0.062764086 = sum of:
            0.062764086 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.062764086 = score(doc=2755,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 2.2016 18:25:22
  18. Yi, K.; Chan, L.M.: ¬A visualization software tool for Library of Congress Subject Headings (2008) 0.01
    
    Content
    The aim of this study is to develop a software tool, VisuaLCSH, for effective searching, browsing, and maintenance of LCSH. This tool enables visualizing subject headings and hierarchical structures implied and embedded in LCSH. A conceptual framework for converting the hierarchical structure of headings in LCSH to an explicit tree structure is proposed, described, and implemented. The highlights of VisuaLCSH are summarized below: 1) revealing multiple aspects of a heading; 2) normalizing the hierarchical relationships in LCSH; 3) showing multi-level hierarchies in LCSH sub-trees; 4) improving the navigational function of LCSH in retrieval; and 5) enabling the implementation of generic search, i.e., the 'exploding' feature, in searching LCSH.
  19. Howarth, L.C.: Mapping the world of knowledge : cartograms and the diffusion of knowledge 0.01
    
    Abstract
    Displaying aspects of "aboutness" by means of non-verbal representations, such as notations, symbols, or icons, or through rich visual displays, such as those of topic maps, can facilitate meaning-making, putting information in context, and situating it relative to other information. As the design of displays of web-enabled information has struggled to keep pace with a burgeoning body of digital content, increasingly innovative approaches to organizing search results have warranted greater attention. Using Worldmapper as an example, this paper examines cartograms - a derivative of the data map that adds dimensionality to the geographic positioning of information - as one approach to representing and managing subject content, and to tracking the diffusion of knowledge across place and time.
  20. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.01
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.

Languages

  • e 35
  • d 5
  • a 1

Types

  • a 29
  • m 8
  • el 7
  • x 3
  • s 1