Search (56 results, page 1 of 3)

  • Filter: theme_ss:"Visualisierung"
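The relevance value after each hit (0.06, 0.03, ...) is a Lucene ClassicSimilarity TF-IDF score. A minimal sketch of how the first hit's score decomposes, with the tf, idf, and norm constants taken from the engine's per-hit scoring trace:

```python
import math

# ClassicSimilarity, as reported in the scoring trace of hit 1:
#   score       = coord * sum over matched terms of (queryWeight * fieldWeight)
#   queryWeight = idf * queryNorm
#   fieldWeight = sqrt(tf) * idf * fieldNorm
def term_score(tf, idf, field_norm, query_norm=0.04472842):
    query_weight = idf * query_norm
    field_weight = math.sqrt(tf) * idf * field_norm
    return query_weight * field_weight

t = term_score(tf=8, idf=3.9394085, field_norm=0.0625) * 0.5  # coord(1/2), term "t"
i = term_score(tf=2, idf=3.7717297, field_norm=0.0625) * 0.5  # coord(1/2), term "i"
print((t + i) * 2 / 3)  # coord(2/3) -> 0.059649743, displayed as 0.06
```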
  1. Wattenberg, M.; Viégas, F.; Johnson, I.: How to use t-SNE effectively (2016) 0.06
    Abstract
    Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. By exploring how it behaves in simple cases, we can learn to use it more effectively. We'll walk through a series of simple examples to illustrate what t-SNE diagrams can and cannot show. The t-SNE technique really is useful, but only if you know how to interpret it.
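The article's central caution is that t-SNE output shifts with hyperparameters such as perplexity; a minimal sketch that makes this visible, assuming scikit-learn and matplotlib on synthetic clusters:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE

# Three well-separated 50-dimensional clusters.
X, y = make_blobs(n_samples=300, centers=3, n_features=50, random_state=0)

# Same data, three perplexities: cluster sizes and spacing in the plot
# change each time, which is why a single t-SNE view can mislead.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, perp in zip(axes, [2, 30, 100]):
    Y = TSNE(n_components=2, perplexity=perp, random_state=0).fit_transform(X)
    ax.scatter(Y[:, 0], Y[:, 1], c=y, s=10)
    ax.set_title(f"perplexity={perp}")
plt.show()
```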
  2. Information visualization : human-centered issues and perspectives (2008) 0.03
    Abstract
    This book is the outcome of the Dagstuhl Seminar on "Information Visualization - Human-Centered Issues in Visual Representation, Interaction, and Evaluation" held at Dagstuhl Castle, Germany, from May 28 to June 1, 2007. Information Visualization (InfoVis) is a relatively new research area, which focuses on the use of visualization techniques to help people understand and analyze data. This book documents and extends the findings and discussions of the various sessions in detail. The seven contributions cover the most important topics: Part I offers general reflections on the value of information visualization; evaluating information visualizations; theoretical foundations of information visualization; and teaching information visualization. Part II deals with specific aspects of creation and collaboration: engaging new audiences for information visualization; process and pitfalls in writing information visualization research papers; and visual analytics: definition, process, and challenges.
    Content
    Contents: Part I. General Reflections: The Value of Information Visualization / Jean-Daniel Fekete, Jarke J. van Wijk, John T. Stasko, Chris North; Evaluating Information Visualizations / Sheelagh Carpendale; Theoretical Foundations of Information Visualization / Helen C. Purchase, Natalia Andrienko, T.J. Jankun-Kelly, Matthew Ward; Teaching Information Visualization / Andreas Kerren, John T. Stasko, Jason Dykes. Part II. Specific Aspects: Creation and Collaboration: Engaging New Audiences for Information Visualization / Jeffrey Heer, Frank van Ham, Sheelagh Carpendale, Chris Weaver, Petra Isenberg; Process and Pitfalls in Writing Information Visualization Research Papers / Tamara Munzner; Visual Analytics: Definition, Process, and Challenges / Daniel Keim, Gennady Andrienko, Jean-Daniel Fekete, Carsten Görg, Jörn Kohlhammer, Guy Melancon
  3. Samoylenko, I.; Chao, T.-C.; Liu, W.-C.; Chen, C.-M.: Visualizing the scientific world and its evolution (2006) 0.03
  4. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.03
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  5. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.03
    Date
    4. 2.2015 9:22:39
  6. Wu, K.-C.; Hsieh, T.-Y.: Affective choosing of clustering and categorization representations in e-book interfaces (2016) 0.02
    Date
    20. 1.2015 18:30:22
  7. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.02
    Date
    9.12.2018 16:22:25
  8. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.02
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
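scikit-learn implements both gradient schemes, so the trade-off the paper measures can be tried directly; a minimal sketch (the data and timing comparison are illustrative):

```python
import time
import numpy as np
from sklearn.manifold import TSNE

X = np.random.RandomState(0).randn(500, 50)  # toy data

# 'exact' evaluates all N^2 pairwise interactions per gradient step;
# 'barnes_hut' approximates distant interactions with a quadtree in O(N log N).
for method in ["exact", "barnes_hut"]:
    start = time.time()
    TSNE(n_components=2, method=method, random_state=0).fit_transform(X)
    print(f"{method}: {time.time() - start:.1f}s")
```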
  9. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.02
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
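A heavily simplified sketch of the idea in PyTorch (an assumption; the paper pretrains the network and calibrates a per-point Gaussian bandwidth to a target perplexity, both omitted here). A small network is trained so that Student-t affinities among its 2-D outputs match Gaussian affinities in the input space:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 50)  # toy high-dimensional data

def p_matrix(X, sigma=1.0):
    # High-dimensional affinities with one fixed Gaussian bandwidth.
    d2 = torch.cdist(X, X) ** 2
    P = torch.exp(-d2 / (2 * sigma ** 2))
    P.fill_diagonal_(0.0)
    return P / P.sum()

def q_matrix(Y):
    # Low-dimensional affinities under the heavy-tailed Student-t kernel.
    Q = 1.0 / (1.0 + torch.cdist(Y, Y) ** 2)
    Q = Q * (1.0 - torch.eye(len(Y)))  # zero self-affinities (grad-safe)
    return Q / Q.sum()

net = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
P = p_matrix(X)

for step in range(500):
    Q = q_matrix(net(X))
    loss = (P * torch.log((P + 1e-12) / (Q + 1e-12))).sum()  # KL(P || Q)
    opt.zero_grad(); loss.backward(); opt.step()

embedding = net(X).detach()  # the learned map also applies to unseen points
```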
  10. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.02
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
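The 1986 conversion rule described above is simple enough to state in code; a minimal sketch with invented toy records, showing how a one-way "See Also" reference is promoted to a hierarchical relation:

```python
def classify_see_also(see_also):
    """see_also maps each heading to the set of its "See Also" targets."""
    rels = {}
    for term, targets in see_also.items():
        for other in targets:
            if term in see_also.get(other, set()):
                rels[(term, other)] = "RT"     # symmetric link -> associative
            else:
                rels[(term, other)] = "BT/NT"  # one-way link assumed hierarchical
    return rels

# Hypothetical pre-conversion records: only one direction was recorded.
refs = {"Doorbells": {"Mammals"}, "Mammals": set()}
print(classify_see_also(refs))  # {('Doorbells', 'Mammals'): 'BT/NT'}
```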
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Wilson, M.: Interfaces for information retrieval (2011) 0.02
    Source
    Interactive information seeking, behaviour and retrieval. Eds.: Ruthven, I. and D. Kelly
  12. Maaten, L. van den; Hinton, G.: Visualizing data using t-SNE (2008) 0.02
    Abstract
    We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
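The closing comparison can be reproduced in outline with scikit-learn, which provides t-SNE, Isomap, and Locally Linear Embedding behind one interface (Sammon mapping is not included and is omitted); a sketch on the standard digits data:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, Isomap, LocallyLinearEmbedding

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 dimensions, 10 classes

models = {
    "t-SNE": TSNE(n_components=2, random_state=0),
    "Isomap": Isomap(n_components=2),
    "LLE": LocallyLinearEmbedding(n_components=2, random_state=0),
}
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, model) in zip(axes, models.items()):
    Y = model.fit_transform(X)
    ax.scatter(Y[:, 0], Y[:, 1], c=y, s=5, cmap="tab10")
    ax.set_title(name)
plt.show()
```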
  13. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.01
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
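The limitation the abstract starts from shows up already in a three-object toy example; a sketch, assuming scikit-learn's metric MDS, of a dissimilarity pattern that violates the triangle inequality and therefore cannot be mapped faithfully into any single metric space:

```python
import numpy as np
from sklearn.manifold import MDS

# A is close to B, B is close to C, yet A and C are far apart:
# 9 > 1 + 1 breaks the triangle inequality, so every metric map distorts.
D = np.array([[0.0, 1.0, 9.0],
              [1.0, 0.0, 1.0],
              [9.0, 1.0, 0.0]])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
Y = mds.fit_transform(D)
print(mds.stress_)  # nonzero residual stress: one map cannot be faithful
```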
  14. Salaba, A.; Mercun, T.; Aalberg, T.: Complexity of work families and entity-based visualization displays (2018) 0.01
  15. IEEE symposium on information visualization 2003 : Seattle, Washington, October 19 - 21, 2003 ; InfoVis 2003. Proceedings (2003) 0.01
    Editor
    Munzner, T. and S. North
  16. Tscherteu, G.; Langreiter, C.: Explorative Netzwerkanalyse im Living Web (2009) 0.01
    Source
    Social Semantic Web: Web 2.0, was nun? Eds.: A. Blumauer and T. Pellegrini
  17. Christoforidis, A.; Heuwing, B.; Mandl, T.: Visualising topics in document collections : an analysis of the interpretation process of historians (2017) 0.01
  18. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    Date
    30. 5.2010 16:22:35
  19. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    Date
    1. 2.2016 18:25:22
  20. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.01
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
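A minimal sketch of the kind of retrieval and metric computation described above; the endpoint and header follow Elsevier's public Scopus Search API, but the author ID, the API key, and the exact response fields are illustrative assumptions:

```python
import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # hypothetical credential
url = "https://api.elsevier.com/content/search/scopus"
params = {"query": "AU-ID(1234567)", "count": 25}  # hypothetical author ID

resp = requests.get(url, params=params, headers={"X-ELS-APIKey": API_KEY})
entries = resp.json()["search-results"]["entry"]  # assumed response shape
cites = sorted((int(e.get("citedby-count", 0)) for e in entries), reverse=True)

# h-index: the largest h such that h publications have at least h citations each.
h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)
print(f"{len(cites)} publications, h-index {h}")
```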

Languages

  • e 44
  • d 11
  • a 1

Types

  • a 44
  • el 12
  • m 8
  • x 3
  • s 2