Search (6 results, page 1 of 1)

  • type_ss:"x"
  • theme_ss:"Visualisierung"
  1. Haller, S.H.M.: Mappingverfahren zur Wissensorganisation (2002) 0.01
    0.009892989 = product of:
      0.029678967 = sum of:
        0.029678967 = product of:
          0.059357934 = sum of:
            0.059357934 = weight(_text_:22 in 3406) [ClassicSimilarity], result of:
              0.059357934 = score(doc=3406,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.38690117 = fieldWeight in 3406, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3406)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    30.05.2010 16:22:35
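Each hit above carries a Lucene ClassicSimilarity explain trace. Reading the tree bottom-up, the leaf score is tf·idf weighting: queryWeight = idf × queryNorm, fieldWeight = sqrt(freq) × idf × fieldNorm, and their product is then scaled by each coord(n/m) factor on the way up. A minimal sketch (the function name and parameters are our own; the constants are copied from the trace of hit 1, not from any search-engine API) reproduces the score:

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coords):
    """Recompute a Lucene ClassicSimilarity leaf score from an explain trace.

    tf          = sqrt(term frequency in the field)
    queryWeight = idf * queryNorm
    fieldWeight = tf * idf * fieldNorm
    The leaf score is queryWeight * fieldWeight, scaled by every
    coord(n/m) factor applied as the explain tree is folded upward.
    """
    tf = math.sqrt(freq)
    query_weight = idf * query_norm
    field_weight = tf * idf * field_norm
    score = query_weight * field_weight
    for coord in coords:
        score *= coord
    return score

# Values taken from the trace of hit 1 (doc=3406, term "22"):
score = classic_similarity_score(
    freq=2.0,
    idf=3.5018296,
    query_norm=0.043811057,
    field_norm=0.078125,
    coords=[1 / 2, 1 / 3],
)
# Agrees with the traced 0.009892989 up to 32-bit float rounding.
print(score)
```

The same arithmetic, with the per-document freq, idf, and fieldNorm values, reproduces every other trace on this page.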
  2. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.01
    0.0059357933 = product of:
      0.01780738 = sum of:
        0.01780738 = product of:
          0.03561476 = sum of:
            0.03561476 = weight(_text_:22 in 1709) [ClassicSimilarity], result of:
              0.03561476 = score(doc=1709,freq=2.0), product of:
                0.15341885 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043811057 = queryNorm
                0.23214069 = fieldWeight in 1709, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1709)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    04.02.2015 9:22:39
  3. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.00
    0.0038202507 = product of:
      0.011460752 = sum of:
        0.011460752 = product of:
          0.022921504 = sum of:
            0.022921504 = weight(_text_:of in 3222) [ClassicSimilarity], result of:
              0.022921504 = score(doc=3222,freq=30.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.33457235 = fieldWeight in 3222, product of:
                  5.477226 = tf(freq=30.0), with freq of:
                    30.0 = termFreq=30.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3222)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. On this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
  4. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.00
    0.0036161449 = product of:
      0.010848435 = sum of:
        0.010848435 = product of:
          0.02169687 = sum of:
            0.02169687 = weight(_text_:of in 4746) [ClassicSimilarity], result of:
              0.02169687 = score(doc=4746,freq=42.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.31669703 = fieldWeight in 4746, product of:
                  6.4807405 = tf(freq=42.0), with freq of:
                    42.0 = termFreq=42.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4746)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. 
The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
  5. Rohner, M.: Betrachtung der Data Visualization Literacy in der angestrebten Schweizer Informationsgesellschaft (2018) 0.00
    0.0011836614 = product of:
      0.0035509842 = sum of:
        0.0035509842 = product of:
          0.0071019684 = sum of:
            0.0071019684 = weight(_text_:of in 4585) [ClassicSimilarity], result of:
              0.0071019684 = score(doc=4585,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.103663445 = fieldWeight in 4585, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4585)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    This publication originated as part of a thesis for the Master of Science FHO in Business Administration, Major in Information and Data Management.
  6. Maas, J.F.: SWD-Explorer : Design und Implementation eines Software-Tools zur erweiterten Suche und grafischen Navigation in der Schlagwortnormdatei (2010) 0.00
    9.863845E-4 = product of:
      0.0029591534 = sum of:
        0.0029591534 = product of:
          0.0059183068 = sum of:
            0.0059183068 = weight(_text_:of in 4035) [ClassicSimilarity], result of:
              0.0059183068 = score(doc=4035,freq=2.0), product of:
                0.06850986 = queryWeight, product of:
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.043811057 = queryNorm
                0.086386204 = fieldWeight in 4035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.5637573 = idf(docFreq=25162, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4035)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    As a cooperatively maintained controlled vocabulary, the Schlagwortnormdatei (SWD) has become an indispensable tool for the subject indexing of media in the German-speaking world. The SWD primarily serves to standardize subject indexing. Beyond that, its structure defines relations between subject headings that can greatly facilitate a well-prepared search; examples are the narrower-/broader-term relations (hyponym/hyperonym) and the similarity relation between concepts. This thesis attempts to ease working with the SWD by building a search and visualization tool. The work focuses, on the one hand, on the subject librarian's task of assigning appropriate subject headings to a medium; this is supported by improving the technical search options over subject headings, e.g. search with regular expressions or search along the hierarchical relations. On the other hand, the relations defined within the SWD are often imprecisely specified, a negative side effect of its interdisciplinary and cooperative construction. It is shown that suitable visualization makes many errors quickly discoverable and correctable, which simplifies the task of data maintenance considerably. This publication goes back to a Master's thesis in the postgraduate distance-learning programme Master of Arts (Library and Information Science) at Humboldt-Universität zu Berlin.