Search (64 results, page 1 of 4)

  Active filters:
  • type_ss:"a"
  • theme_ss:"Visualisierung"
  1. Clavier, V.: Knowledge organization, data and algorithms : the new era of visual representation (2019) 0.05
    0.050731372 = product of:
      0.101462744 = sum of:
        0.07602732 = weight(_text_:data in 5634) [ClassicSimilarity], result of:
          0.07602732 = score(doc=5634,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.513453 = fieldWeight in 5634, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=5634)
        0.025435425 = product of:
          0.05087085 = sum of:
            0.05087085 = weight(_text_:processing in 5634) [ClassicSimilarity], result of:
              0.05087085 = score(doc=5634,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.26835677 = fieldWeight in 5634, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5634)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article shows how visual representations have progressively taken the lead over classical language-based models of knowledge organization (KO). The paper adopts a theoretical and historical perspective and focuses on the consequences that the growing volume of produced data has had on KO models. Until now, data visualization tools have been used mainly by researchers with expertise in textual data processing or in computational linguistics, but these tools are now accessible to a much larger number of users. This raises new issues for KO and for the other professions and data-gathering institutions that help define new standards and KO representations.
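    The score breakdown printed under each hit appears to be standard Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a check, the minimal Python sketch below reproduces the _text_:data clause of this first hit from the constants in the tree above, assuming Lucene ClassicSimilarity's documented formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))).

      import math

      MAX_DOCS = 44218            # maxDocs from the explain output
      QUERY_NORM = 0.046827413    # queryNorm from the explain output

      def idf(doc_freq):
          # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

      def term_score(doc_freq, freq, field_norm):
          query_weight = idf(doc_freq) * QUERY_NORM       # 0.14807065 for "data"
          tf = math.sqrt(freq)                            # 3.4641016 for freq=12
          field_weight = tf * idf(doc_freq) * field_norm  # 0.513453
          return query_weight * field_weight

      # "data" clause of hit 1 (doc 5634): expect ~0.07602732
      print(term_score(doc_freq=5088, freq=12.0, field_norm=0.046875))

    The final score then multiplies the summed clause scores by the coord factor; here two of four query clauses matched, hence coord(2/4) = 0.5.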
  2. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.04
    0.04172619 = product of:
      0.08345238 = sum of:
        0.05173004 = weight(_text_:data in 2755) [ClassicSimilarity], result of:
          0.05173004 = score(doc=2755,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.03172234 = product of:
          0.06344468 = sum of:
            0.06344468 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.06344468 = score(doc=2755,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1.2.2016 18:25:22
    Source
    Semantic keyword-based search on structured data sources: First COST Action IC1302 International KEYSTONE Conference, IKC 2015, Coimbra, Portugal, September 8-9, 2015. Revised Selected Papers. Eds.: J. Cardoso et al.
  3. Huang, S.-C.; Bias, R.G.; Schnyer, D.: How are icons processed by the brain? : Neuroimaging measures of four types of visual stimuli used in information systems (2015) 0.04
    0.04085299 = product of:
      0.08170598 = sum of:
        0.05173004 = weight(_text_:data in 1725) [ClassicSimilarity], result of:
          0.05173004 = score(doc=1725,freq=8.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.34936053 = fieldWeight in 1725, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1725)
        0.029975938 = product of:
          0.059951875 = sum of:
            0.059951875 = weight(_text_:processing in 1725) [ClassicSimilarity], result of:
              0.059951875 = score(doc=1725,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3162615 = fieldWeight in 1725, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1725)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    We sought to understand how users interpret meanings of symbols commonly used in information systems, especially how icons are processed by the brain. We investigated Chinese and English speakers' processing of 4 types of visual stimuli: icons, pictures, Chinese characters, and English words. The goal was to examine, via functional magnetic resonance imaging (fMRI) data, the hypothesis that people cognitively process icons as logographic words and to provide neurological evidence related to human-computer interaction (HCI), which has been rare in traditional information system studies. According to the neuroimaging data of 19 participants, we conclude that icons are not cognitively processed as logographical words like Chinese characters, although they both stimulate the semantic system in the brain that is needed for language processing. Instead, more similar to images and pictures, icons are not as efficient as words in conveying meanings, and brains (people) make more effort to process icons than words. We use this study to demonstrate that it is practicable to test information system constructs such as elements of graphical user interfaces (GUIs) with neuroscience data and that, with such data, we can better understand individual or group differences related to system usage and user-computer interactions.
  4. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.03
    0.033504575 = product of:
      0.06700915 = sum of:
        0.031038022 = weight(_text_:data in 3205) [ClassicSimilarity], result of:
          0.031038022 = score(doc=3205,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.2096163 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.035971127 = product of:
          0.071942255 = sum of:
            0.071942255 = weight(_text_:processing in 3205) [ClassicSimilarity], result of:
              0.071942255 = score(doc=3205,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3795138 = fieldWeight in 3205, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3205)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
    Theme
    Data Mining
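    As a rough illustration of the metadata-based pipeline the abstract describes (summarization techniques chained with similarity measures), the sketch below uses TF-IDF over titles and cosine similarity as stand-in assumptions; the concrete techniques used by Open Knowledge Maps are not specified here.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Toy article metadata (titles only); real input would be richer metadata.
      titles = [
          "Visualizing data using t-SNE",
          "Visualization of uncertainty in tag clouds",
          "Accelerating t-SNE using tree-based algorithms",
      ]
      vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
      similarity = cosine_similarity(vectors)  # pairwise article similarity
      print(similarity.round(2))               # basis for laying out a knowledge map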
  5. Zhang, J.; Zhao, Y.: A user term visualization analysis based on a social question and answer log (2013) 0.03
    0.028887425 = product of:
      0.05777485 = sum of:
        0.03657866 = weight(_text_:data in 2715) [ClassicSimilarity], result of:
          0.03657866 = score(doc=2715,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.24703519 = fieldWeight in 2715, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2715)
        0.021196188 = product of:
          0.042392377 = sum of:
            0.042392377 = weight(_text_:processing in 2715) [ClassicSimilarity], result of:
              0.042392377 = score(doc=2715,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.22363065 = fieldWeight in 2715, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2715)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The authors of this paper investigate consumers' diabetes-related terms based on a log from the Yahoo! Answers social question and answer (Q&A) forum, ascertain characteristics of and relationships among terms related to diabetes from the consumers' perspective, and reveal users' diabetes information-seeking patterns. In this study, log analysis, data coding, and multidimensional scaling (MDS) visualization methods were used for analysis. The visual analyses were conducted at two levels: term analysis within a category and category analysis among the categories in the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. Twelve categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) emerged from the data coding analysis of the diabetes-related schema. The analyses at the two levels show that terms and categories were clustered and patterns were revealed. Future research directions are also included.
    Source
    Information processing and management. 49(2013) no.5, S.1019-1048
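    The multidimensional scaling step mentioned in the abstract can be approximated with scikit-learn; a minimal sketch, assuming a hypothetical precomputed term dissimilarity matrix (the paper's own co-occurrence data are not reproduced here):

      import numpy as np
      from sklearn.manifold import MDS

      # Hypothetical dissimilarities between four terms (symmetric, zero diagonal).
      D = np.array([[0.0, 0.2, 0.8, 0.7],
                    [0.2, 0.0, 0.6, 0.9],
                    [0.8, 0.6, 0.0, 0.3],
                    [0.7, 0.9, 0.3, 0.0]])
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(D)
      print(coords)  # 2-D positions for plotting the term map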
  6. Hajdu Barat, A.: Human perception and knowledge organization : visual imagery (2007) 0.03
    0.027920479 = product of:
      0.055840958 = sum of:
        0.02586502 = weight(_text_:data in 2595) [ClassicSimilarity], result of:
          0.02586502 = score(doc=2595,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 2595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2595)
        0.029975938 = product of:
          0.059951875 = sum of:
            0.059951875 = weight(_text_:processing in 2595) [ClassicSimilarity], result of:
              0.059951875 = score(doc=2595,freq=4.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.3162615 = fieldWeight in 2595, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2595)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - This paper aims to explore the theory and practice of knowledge organization and its necessary connection to human perception, and to show one possible solution. Design/methodology/approach - The author attempts to survey the problem of concept-building and extension, as well as the determination of semantics, from different aspects. The purpose is to find criteria for the choice of the solution that best incorporates users into the design cycles of knowledge organization systems. Findings - It is widely agreed that cognition provides the basis for concept-building; however, there is debate about the next stage of processing. Fundamentally, what is the connection between perception and the higher cognitive processes? The perceptual method does not separate the two but rather considers them united, with perception permeating cognition. By contrast, the linguistic method considers perception an information-receiving system: separate from, and following, perception, the cognitive subsystems perform information and data processing, leading to both knowledge organization and representation. We assume by that model that top-level concepts emerge from knowledge organization and representation. The paper points out the obvious connection between visual imagery and the internet, and the perceptual access to knowledge organization and information retrieval. Some practical and characteristic solutions for the visualization of information are presented, without any claim to completeness. Research limitations/implications - Librarians need to identify those semantic characteristics which stimulate a similar conceptual image both in the mind of the librarian and in the mind of the user. Originality/value - For a fresh perspective, an understanding of perception is required as well.
  7. Di Maio, P.: Linked data beyond libraries : towards universal interfaces and knowledge unification (2015) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 2553) [ClassicSimilarity], result of:
          0.08778879 = score(doc=2553,freq=4.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 2553, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=2553)
      0.25 = coord(1/4)
    
    Source
    Linked data and user interaction: the road ahead. Eds.: Cervone, H.F. u. L.G. Svensson
  8. Lamb, I.; Larson, C.: Shining a light on scientific data : building a data catalog to foster data sharing and reuse (2016) 0.02
    0.021947198 = product of:
      0.08778879 = sum of:
        0.08778879 = weight(_text_:data in 3195) [ClassicSimilarity], result of:
          0.08778879 = score(doc=3195,freq=16.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.5928845 = fieldWeight in 3195, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3195)
      0.25 = coord(1/4)
    
    Abstract
    The scientific community's growing eagerness to make research data available to the public offers libraries - with our expertise in metadata and discovery - an interesting new opportunity. This paper details the in-house creation of a "data catalog" which describes datasets ranging from population-level studies like the US Census to small, specialized datasets created by researchers at our own institution. Based on Symfony2 and Solr, the data catalog provides a powerful search interface to help researchers locate the data that can help them, and an administrative interface so librarians can add, edit, and manage metadata elements at will. This paper will outline the successes, failures, and total redos that culminated in the current manifestation of our data catalog.
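    The paper names Solr as the search backend but not its schema; a hedged sketch of the kind of query such a data catalog could answer, assuming a hypothetical core name ("datasets") and field names:

      import requests

      # Hypothetical Solr core and fields; the paper does not publish its schema.
      resp = requests.get(
          "http://localhost:8983/solr/datasets/select",
          params={"q": "title:census", "fl": "id,title,description", "rows": 10},
      )
      # Standard Solr JSON response: responseHeader plus response.docs.
      for doc in resp.json()["response"]["docs"]:
          print(doc["id"], doc.get("title"))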
  9. Osinska, V.; Kowalska, M.; Osinski, Z.: The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.02
    0.020863095 = product of:
      0.04172619 = sum of:
        0.02586502 = weight(_text_:data in 4641) [ClassicSimilarity], result of:
          0.02586502 = score(doc=4641,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.17468026 = fieldWeight in 4641, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4641)
        0.01586117 = product of:
          0.03172234 = sum of:
            0.03172234 = weight(_text_:22 in 4641) [ClassicSimilarity], result of:
              0.03172234 = score(doc=4641,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.19345059 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4641)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of their research problems. They can also identify cooperation networks and, consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality as well as attempts to define different user groups. A survey about the interface was sent to selected researchers employed at Nicolaus Copernicus University. The results were used to answer the question as to whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  10. Bornmann, L.; Haunschild, R.: Overlay maps based on Mendeley data : the use of altmetrics for readership networks (2016) 0.02
    0.020529725 = product of:
      0.0821189 = sum of:
        0.0821189 = weight(_text_:data in 3230) [ClassicSimilarity], result of:
          0.0821189 = score(doc=3230,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.55459267 = fieldWeight in 3230, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3230)
      0.25 = coord(1/4)
    
    Abstract
    Visualization of scientific results using networks has become popular in scientometric research. We provide base maps for Mendeley reader count data using the publication year 2012 from the Web of Science data. Example networks are shown and explained. The reader can use our base maps to visualize other results with the VOSViewer. The proposed overlay maps are able to show the impact of publications in terms of readership data. The advantage of using our base maps is that it is not necessary for the user to produce a network based on all data (e.g., from one year); instead, the user can collect the Mendeley data for a single institution (or for journals or topics) and match them with our already produced information. Generation of such large-scale networks is still a demanding task despite the available computer power and digital data availability. Therefore, it is very useful to have base maps and to create the network with the overlay technique.
  11. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.02
    0.019398764 = product of:
      0.077595055 = sum of:
        0.077595055 = weight(_text_:data in 3888) [ClassicSimilarity], result of:
          0.077595055 = score(doc=3888,freq=18.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.52404076 = fieldWeight in 3888, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3888)
      0.25 = coord(1/4)
    
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
    Theme
    Data Mining
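    For readers who want to try the technique, t-SNE is widely available; a minimal usage sketch with scikit-learn's implementation (an assumption - the sketch does not use the authors' original code):

      from sklearn.datasets import load_digits
      from sklearn.manifold import TSNE

      X, y = load_digits(return_X_y=True)  # 1797 samples, 64 dimensions
      # Embed into 2-D; perplexity balances local vs. global structure.
      embedding = TSNE(n_components=2, perplexity=30.0,
                       random_state=0).fit_transform(X)
      print(embedding.shape)               # (1797, 2)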
  12. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.02
    0.01900683 = product of:
      0.07602732 = sum of:
        0.07602732 = weight(_text_:data in 3884) [ClassicSimilarity], result of:
          0.07602732 = score(doc=3884,freq=12.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.513453 = fieldWeight in 3884, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.25 = coord(1/4)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
    Theme
    Data Mining
  13. Singh, A.; Sinha, U.; Sharma, D.k.: Semantic Web and data visualization (2020) 0.02
    0.017156914 = product of:
      0.068627656 = sum of:
        0.068627656 = weight(_text_:data in 79) [ClassicSimilarity], result of:
          0.068627656 = score(doc=79,freq=22.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46347913 = fieldWeight in 79, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.25 = coord(1/4)
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. The Semantic Web is an emerging trend due to its ability to integrate data from disparate sources and hence make the web more user-friendly. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way towards becoming a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, which makes the two an appropriate combination. The objective of this chapter is to provide fundamental insights concerning semantic web technologies; in addition, it elucidates the issues as well as the solutions regarding the semantic web. The chapter highlights the semantic web architecture in detail while also comparing it with the traditional search system, and it classifies the semantic web architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology and attempts to illustrate different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
    Series
    Lecture notes on data engineering and communications technologies book series; vol.32
    Source
    Data visualization and knowledge engineering. Eds. J. Hemanth, et al
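    To make the chapter's RDF pillar concrete, a minimal sketch with rdflib (an assumed, commonly used Python RDF library; the chapter itself does not prescribe a toolkit, and the URIs below are illustrative only):

      from rdflib import Graph

      # Toy RDF graph in Turtle; predicates and URIs are hypothetical.
      ttl = """
      @prefix ex: <http://example.org/> .
      ex:SemanticWeb ex:introducedBy ex:TimBernersLee ;
                     ex:buildsOn ex:RDF, ex:Ontology, ex:XML .
      """
      g = Graph().parse(data=ttl, format="turtle")
      q = "SELECT ?pillar WHERE { ex:SemanticWeb ex:buildsOn ?pillar }"
      for row in g.query(q, initNs={"ex": "http://example.org/"}):
          print(row.pillar)  # prints the three "pillar" resources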
  14. Ekström, B.: Trace data visualisation enquiry : a methodological coupling for studying information practices in relation to information systems (2022) 0.02
    0.017108103 = product of:
      0.06843241 = sum of:
        0.06843241 = weight(_text_:data in 687) [ClassicSimilarity], result of:
          0.06843241 = score(doc=687,freq=14.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.46216056 = fieldWeight in 687, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0390625 = fieldNorm(doc=687)
      0.25 = coord(1/4)
    
    Abstract
    Purpose The purpose of this paper is to examine whether and how a methodological coupling of visualisations of trace data and interview methods can be utilised for information practices studies. Design/methodology/approach Trace data visualisation enquiry is suggested as the coupling of visualising data exported from an information system and using these visualisations as a basis for interview guides and elicitation in information practices research. The methodology is illustrated and applied through a small-scale empirical study of a citizen science project. Findings The study found that trace data visualisation enquiry enables fine-grained investigation of temporal aspects of information practices and makes it possible to compare and explore temporal and geographical aspects of practices. Moreover, the methodology made it possible to understand information practices through trace data that were discussed through elicitation with participants. The study also found that the approach can aid a researcher in gaining a simultaneously overarching and close picture of information practices, which can have theoretical and methodological implications for information practices research. Originality/value Trace data visualisation enquiry extends current methods for investigating information practices as it enables focus to be placed on the traces of practices as recorded through interactions with information systems and on study participants' accounts of activities.
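    A minimal sketch of the first half of the coupling - visualizing exported trace data before the interviews - assuming a hypothetical log of timestamped interactions (the paper does not publish its data or field names):

      import pandas as pd
      import matplotlib.pyplot as plt

      # Hypothetical exported trace data: one row per logged interaction.
      events = pd.DataFrame({
          "timestamp": pd.to_datetime(["2021-05-01 09:00", "2021-05-01 09:20",
                                       "2021-05-02 14:05", "2021-05-03 08:45"]),
          "action": ["upload", "annotate", "upload", "search"],
      })
      # Aggregate per day to expose the temporal shape of the practice.
      counts = events.set_index("timestamp").resample("D")["action"].count()
      counts.plot(kind="bar", title="Interactions per day")
      plt.show()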
  15. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.02
    0.016690476 = product of:
      0.03338095 = sum of:
        0.020692015 = weight(_text_:data in 2659) [ClassicSimilarity], result of:
          0.020692015 = score(doc=2659,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.1397442 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.012688936 = product of:
          0.025377871 = sum of:
            0.025377871 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.025377871 = score(doc=2659,freq=2.0), product of:
                0.16398162 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046827413 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
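    The conversion rule the abstract criticizes - symmetric 'See Also' links become RT, one-way links are assumed hierarchical - is easy to state in code; a toy sketch with networkx (the actual LCSH authority data are not included here, so the headings below are illustrative):

      import networkx as nx

      # Toy "See Also" reference graph; edges are directed references.
      see_also = nx.DiGraph([("Doorbells", "Mammals"),
                             ("Cats", "Dogs"), ("Dogs", "Cats")])
      # The 1986 rule: a reference is associative only if it is reciprocated.
      symmetric = [(u, v) for u, v in see_also.edges if see_also.has_edge(v, u)]
      asymmetric = [(u, v) for u, v in see_also.edges if not see_also.has_edge(v, u)]
      print("associative (RT):", symmetric)
      print("assumed hierarchical (BT/NT):", asymmetric)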
  16. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 3883) [ClassicSimilarity], result of:
          0.06271934 = score(doc=3883,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 3883, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3883)
      0.25 = coord(1/4)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
    Theme
    Data Mining
  17. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 3886) [ClassicSimilarity], result of:
          0.06271934 = score(doc=3886,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 3886, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
      0.25 = coord(1/4)
    
    Abstract
    The paper investigates the acceleration of t-SNE - an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots - using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
    Theme
    Data Mining
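    scikit-learn's TSNE exposes the Barnes-Hut approximation described in this paper; a short usage sketch (assumption: the dual-tree variant is not available there, so only the Barnes-Hut path is shown):

      import numpy as np
      from sklearn.manifold import TSNE

      X = np.random.RandomState(0).rand(5000, 50)  # synthetic high-dimensional data
      # method="barnes_hut" uses the O(N log N) gradient approximation;
      # method="exact" falls back to the original O(N^2) computation.
      embedding = TSNE(method="barnes_hut", angle=0.5,
                       random_state=0).fit_transform(X)
      print(embedding.shape)  # (5000, 2)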
  18. Salaba, A.; Mercun, T.; Aalberg, T.: Complexity of work families and entity-based visualization displays (2018) 0.02
    0.015679834 = product of:
      0.06271934 = sum of:
        0.06271934 = weight(_text_:data in 5184) [ClassicSimilarity], result of:
          0.06271934 = score(doc=5184,freq=6.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.42357713 = fieldWeight in 5184, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5184)
      0.25 = coord(1/4)
    
    Abstract
    Conceptual modeling of bibliographic data, including the FR models and the consolidated IFLA LRM, has provided an opportunity to shift focus to entities and relationships and to support hierarchical work-based exploration of bibliographic information. This paper reports on a study examining the complexity of a work's bibliographic family data and user interactions with data visualizations, compared to traditional displays. Findings suggest that the FRBR-based visual bibliographic information system supports work families of different complexities more equally than a traditional system. Differences between the two systems also show that the FRBR-based system was more effective especially for related-works and author-related tasks.
  19. Lindstrom, N.; Malmsten, M.: Building interfaces on a networked graph (2015) 0.02
    0.015519011 = product of:
      0.062076043 = sum of:
        0.062076043 = weight(_text_:data in 2557) [ClassicSimilarity], result of:
          0.062076043 = score(doc=2557,freq=2.0), product of:
            0.14807065 = queryWeight, product of:
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.046827413 = queryNorm
            0.4192326 = fieldWeight in 2557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1620505 = idf(docFreq=5088, maxDocs=44218)
              0.09375 = fieldNorm(doc=2557)
      0.25 = coord(1/4)
    
    Source
    Linked data and user interaction: the road ahead. Eds.: Cervone, H.F. u. L.G. Svensson
  20. Zhang, J.: TOFIR: A tool of facilitating information retrieval : introduce a visual retrieval model (2001) 0.01
    0.014837332 = product of:
      0.05934933 = sum of:
        0.05934933 = product of:
          0.11869866 = sum of:
            0.11869866 = weight(_text_:processing in 7711) [ClassicSimilarity], result of:
              0.11869866 = score(doc=7711,freq=2.0), product of:
                0.18956426 = queryWeight, product of:
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.046827413 = queryNorm
                0.6261658 = fieldWeight in 7711, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.048147 = idf(docFreq=2097, maxDocs=44218)
                  0.109375 = fieldNorm(doc=7711)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 37(2001) no.4, S.639-657

Languages

  • e 62
  • a 1
  • d 1