Search (131 results, page 1 of 7)

  • language_ss:"e"
  • theme_ss:"Visualisierung"
  1. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.03
    0.02723573 = product of:
      0.05447146 = sum of:
        0.009274333 = weight(_text_:in in 3355) [ClassicSimilarity], result of:
          0.009274333 = score(doc=3355,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 3355, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.020103889 = weight(_text_:und in 3355) [ClassicSimilarity], result of:
          0.020103889 = score(doc=3355,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.20778441 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.025093239 = product of:
          0.050186478 = sum of:
            0.050186478 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.050186478 = score(doc=3355,freq=4.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    BK
    02.10 Wissenschaft und Gesellschaft
    Classification
    02.10 Wissenschaft und Gesellschaft
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    Footnote
    Rez. in: JASIST 67(2017) no.2, S.533-536 (White, H.D.).
    LCSH
    Communication in science / Data processing
    Subject
    Communication in science / Data processing
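    Note: the indented figures beneath each record are Lucene/Solr "explain" trees for the ClassicSimilarity (TF-IDF) ranking that orders this list. As a reading aid, the short Python sketch below re-derives record 1's score from the values shown in its tree above; the constants are copied from that tree, while the formulas (tf = sqrt(freq), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, coord = matched clauses / total clauses) are the standard ClassicSimilarity definitions and are assumed here, since the page itself does not spell them out.

    import math

    def clause_score(freq, idf, query_norm, field_norm):
        """Score of one term clause: queryWeight * fieldWeight (Lucene ClassicSimilarity)."""
        tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
        query_weight = idf * query_norm       # e.g. 1.3602545 * 0.043654136 ≈ 0.059380736
        field_weight = tf * idf * field_norm  # e.g. 2.4494898 * 1.3602545 * 0.046875 ≈ 0.1561842
        return query_weight * field_weight

    QUERY_NORM = 0.043654136  # queryNorm from the explain tree of record 1 (doc 3355)

    s_in  = clause_score(freq=6.0, idf=1.3602545, query_norm=QUERY_NORM, field_norm=0.046875)
    s_und = clause_score(freq=4.0, idf=2.216367,  query_norm=QUERY_NORM, field_norm=0.046875)
    s_22  = clause_score(freq=4.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=0.046875) * 0.5  # coord(1/2)

    total = (s_in + s_und + s_22) * (3 / 6)  # coord(3/6): three of six query clauses matched

    print(s_in, s_und, s_22)  # ≈ 0.009274333, 0.020103889, 0.025093239
    print(total)              # ≈ 0.02723573, the score shown for record 1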
  2. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.02
    0.018924978 = product of:
      0.037849955 = sum of:
        0.006310384 = weight(_text_:in in 1397) [ClassicSimilarity], result of:
          0.006310384 = score(doc=1397,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.01675324 = weight(_text_:und in 1397) [ClassicSimilarity], result of:
          0.01675324 = score(doc=1397,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17315367 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.029572664 = score(doc=1397,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Classification
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
    Date
    22. 3.2008 14:29:25
    RVK
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
  3. Bekavac, B.; Herget, J.; Hierl, S.; Öttl, S.: Visualisierungskomponenten bei webbasierten Suchmaschinen : Methoden, Kriterien und ein Marktüberblick (2007) 0.02
    0.016708793 = product of:
      0.050126377 = sum of:
        0.006246961 = weight(_text_:in in 399) [ClassicSimilarity], result of:
          0.006246961 = score(doc=399,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.043879416 = weight(_text_:und in 399) [ClassicSimilarity], result of:
          0.043879416 = score(doc=399,freq=14.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.4535172 = fieldWeight in 399, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.33333334 = coord(2/6)
    
    Abstract
    Web-based search engines increasingly include systems with visualization components for representing results. The visualization approaches differ markedly from one another in aim and execution. This article describes the visualization methods in use, systematizes them by means of a classification, presents the leading freely accessible systems, and compares them against the criteria derived from that systematization. Typical problem areas are identified, and the most important commonalities and differences among the systems examined are worked out. The article closes by presenting two innovative visualization concepts: the visualization of relations within result sets and the visualization of relations in music search.
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.3, S.149-158
  4. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.014064478 = product of:
      0.04219343 = sum of:
        0.012620768 = weight(_text_:in in 2755) [ClassicSimilarity], result of:
          0.012620768 = score(doc=2755,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 2755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.029572664 = product of:
          0.059145328 = sum of:
            0.059145328 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.059145328 = score(doc=2755,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  5. Representation in scientific practice revisited (2014) 0.01
    0.010608278 = product of:
      0.031824835 = sum of:
        0.0128707085 = weight(_text_:in in 3543) [ClassicSimilarity], result of:
          0.0128707085 = score(doc=3543,freq=26.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2167489 = fieldWeight in 3543, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
        0.018954126 = weight(_text_:und in 3543) [ClassicSimilarity], result of:
          0.018954126 = score(doc=3543,freq=8.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 3543, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
      0.33333334 = coord(2/6)
    
    Abstract
    Representation in Scientific Practice, published by the MIT Press in 1990, helped coalesce a long-standing interest in scientific visualization among historians, philosophers, and sociologists of science and remains a touchstone for current investigations in science and technology studies. This volume revisits the topic, taking into account both the changing conceptual landscape of STS and the emergence of new imaging technologies in scientific practice. It offers cutting-edge research on a broad array of fields that study information as well as short reflections on the evolution of the field by leading scholars, including some of the contributors to the 1990 volume. The essays consider the ways in which viewing experiences are crafted in the digital era; the embodied nature of work with digital technologies; the constitutive role of materials and technologies -- from chalkboards to brain scans -- in the production of new scientific knowledge; the metaphors and images mobilized by communities of practice; and the status and significance of scientific imagery in professional and popular culture. Contributors: Morana Alac, Michael Barany, Anne Beaulieu, Annamaria Carusi, Catelijne Coopmans, Lorraine Daston, Sarah de Rijcke, Joseph Dumit, Emma Frow, Yann Giraud, Aud Sissel Hoel, Martin Kemp, Bruno Latour, John Law, Michael Lynch, Donald MacKenzie, Cyrus Mody, Natasha Myers, Rachel Prentice, Arie Rip, Martin Ruivenkamp, Lucy Suchman, Janet Vertesi, Steve Woolgar
    BK
    30.02 Philosophie und Theorie der Naturwissenschaften
    30.03 Methoden und Techniken in den Naturwissenschaften
    Classification
    30.02 Philosophie und Theorie der Naturwissenschaften
    30.03 Methoden und Techniken in den Naturwissenschaften
    Footnote
    Rez. in: JASIST 68(2017) no.4, S.1068-1069 (Hans-Jörg Rheinberger)
  6. Neubauer, G.: Visualization of typed links in linked data (2017) 0.01
    0.010473757 = product of:
      0.03142127 = sum of:
        0.007728611 = weight(_text_:in in 3912) [ClassicSimilarity], result of:
          0.007728611 = score(doc=3912,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 3912, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.02369266 = weight(_text_:und in 3912) [ClassicSimilarity], result of:
          0.02369266 = score(doc=3912,freq=8.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.24487628 = fieldWeight in 3912, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.33333334 = coord(2/6)
    
    Abstract
    This thesis deals with visualizations of typed links in Linked Data. The scientific fields that broadly delimit its subject are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, introduced by Tim Berners-Lee in 2001, is an extension of the World Wide Web (Web 2.0). Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are a central requirement and a core part of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary motivation for this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Starting from this context, a stepwise extension of the information, with the aim of offering practical guidelines, ties these elaborated design guidelines together. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that emerged from the testing.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  7. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.01
    0.010439968 = product of:
      0.0313199 = sum of:
        0.012365777 = weight(_text_:in in 317) [ClassicSimilarity], result of:
          0.012365777 = score(doc=317,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 317, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
        0.018954126 = weight(_text_:und in 317) [ClassicSimilarity], result of:
          0.018954126 = score(doc=317,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
      0.33333334 = coord(2/6)
    
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Series
    Lecture notes in computer science ; 5173
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Eckert, K.: The ICE-map visualization (2011) 0.01
    0.010439968 = product of:
      0.0313199 = sum of:
        0.012365777 = weight(_text_:in in 4743) [ClassicSimilarity], result of:
          0.012365777 = score(doc=4743,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 4743, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.018954126 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.018954126 = score(doc=4743,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about the concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and the result of the statistical analysis by means of a treemap.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
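    Note: the information-content statistic mentioned in the abstract above is, in its generic form, IC(c) = -log p(c), with p estimated from concept usage aggregated over the hierarchy. The sketch below illustrates that generic notion on an invented toy hierarchy; it is not Eckert's exact ICE-Map statistic, and all names and counts are made up for illustration.

    import math

    # Toy concept hierarchy and per-concept usage counts (all invented for illustration).
    children = {
        "root": ["science", "arts"],
        "science": ["physics", "biology"],
        "arts": [],
        "physics": [],
        "biology": [],
    }
    usage = {"root": 0, "science": 5, "arts": 10, "physics": 40, "biology": 45}

    def aggregated_usage(node):
        """Usage of a concept plus the usage of everything below it."""
        return usage[node] + sum(aggregated_usage(child) for child in children[node])

    total = aggregated_usage("root")

    def information_content(node):
        """Generic IC(c) = -log p(c), with p(c) estimated from aggregated usage."""
        return -math.log(aggregated_usage(node) / total)

    for concept in children:
        print(concept, round(information_content(concept), 3))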
  9. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    0.010286495 = product of:
      0.030859483 = sum of:
        0.013115887 = weight(_text_:in in 3693) [ClassicSimilarity], result of:
          0.013115887 = score(doc=3693,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 3693, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.035487194 = score(doc=3693,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents and present the results in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  10. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.01
    0.010046529 = product of:
      0.030139584 = sum of:
        0.0133863455 = weight(_text_:in in 3222) [ClassicSimilarity], result of:
          0.0133863455 = score(doc=3222,freq=18.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22543246 = fieldWeight in 3222, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
        0.01675324 = weight(_text_:und in 3222) [ClassicSimilarity], result of:
          0.01675324 = score(doc=3222,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17315367 = fieldWeight in 3222, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
      0.33333334 = coord(2/6)
    
    Abstract
    The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
    Imprint
    Mannheim : Fakultät für Mathematik und Informatik
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  11. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.009484224 = product of:
      0.028452672 = sum of:
        0.010709076 = weight(_text_:in in 1289) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1289,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1289, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.035487194 = score(doc=1289,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  12. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.01
    0.0091357 = product of:
      0.027407099 = sum of:
        0.012620768 = weight(_text_:in in 5272) [ClassicSimilarity], result of:
          0.012620768 = score(doc=5272,freq=16.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.21253976 = fieldWeight in 5272, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5272)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.029572664 = score(doc=5272,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5272)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
    Date
    22. 7.2006 16:11:05
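    Note: the abstract above names Freeman's betweenness centrality as the metric used to flag pivotal points. Purely as an illustration of that metric (not of CiteSpace's own implementation), the following sketch computes it with networkx on a small invented co-citation graph; all node labels are placeholders.

    import networkx as nx

    # Small invented co-citation network; labels are placeholders, not real cited works.
    G = nx.Graph()
    G.add_edges_from([
        ("paper A", "paper B"), ("paper B", "paper C"),
        ("paper C", "paper D"), ("paper B", "paper D"),
        ("paper D", "paper E"),
    ])

    centrality = nx.betweenness_centrality(G)  # fraction of shortest paths passing through each node
    pivotal = max(centrality, key=centrality.get)

    print(sorted(centrality.items(), key=lambda kv: -kv[1]))
    print("candidate pivotal point:", pivotal)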
  13. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.01
    0.0091349725 = product of:
      0.027404916 = sum of:
        0.010820055 = weight(_text_:in in 1528) [ClassicSimilarity], result of:
          0.010820055 = score(doc=1528,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1822149 = fieldWeight in 1528, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.01658486 = weight(_text_:und in 1528) [ClassicSimilarity], result of:
          0.01658486 = score(doc=1528,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.17141339 = fieldWeight in 1528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
      0.33333334 = coord(2/6)
    
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as a basis for indexing in regard to its ability to adequately cover the contents to be indexed and as a basis for the specific indexing method used is of crucial importance in automatic indexing. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.01
    0.0091104945 = product of:
      0.027331483 = sum of:
        0.013115887 = weight(_text_:in in 3205) [ClassicSimilarity], result of:
          0.013115887 = score(doc=3205,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 3205, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.014215595 = weight(_text_:und in 3205) [ClassicSimilarity], result of:
          0.014215595 = score(doc=3205,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
      0.33333334 = coord(2/6)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
    Content
    Contribution to a special issue on 'Computerlinguistik und Bibliotheken' (computational linguistics and libraries). See: http://0277.ch/ojs/index.php/cdrs_0277/article/view/157/355.
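    Note: the "similarity measures on article metadata" mentioned in the abstract above can be illustrated, very generically, with TF-IDF vectors over titles followed by cosine similarity and a simple clustering into map areas. The sketch below is such a generic illustration (the sample titles are taken from records elsewhere in this result list); it is not the Open Knowledge Maps pipeline itself.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.cluster import KMeans

    # Sample article metadata (titles of records appearing elsewhere in this result list).
    titles = [
        "Visualization of uncertainty in tag clouds",
        "Visual analysis of classification systems and library collections",
        "Detecting and visualizing emerging trends and transient patterns in scientific literature",
        "Thesaurus analysis and visualization in semantic search applications",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
    similarity = cosine_similarity(vectors)                                       # pairwise article similarity
    areas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)  # toy "map areas"

    print(similarity.round(2))
    print(areas)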
  15. Tufte, E.R.: Envisioning information (1990) 0.01
    0.008486142 = product of:
      0.025458427 = sum of:
        0.005354538 = weight(_text_:in in 3733) [ClassicSimilarity], result of:
          0.005354538 = score(doc=3733,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.09017298 = fieldWeight in 3733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
        0.020103889 = weight(_text_:und in 3733) [ClassicSimilarity], result of:
          0.020103889 = score(doc=3733,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.20778441 = fieldWeight in 3733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
      0.33333334 = coord(2/6)
    
    BK
    70.03 Methoden, Techniken und Organisation der sozialwissenschaftlichen Forschung
    Classification
    70.03 Methoden, Techniken und Organisation der sozialwissenschaftlichen Forschung
    Footnote
    Rez. in: College & research libraries 52(1991) S.382-383 (P. Wilson); Knowledge organization 20(1993) no.1, S.61-62 (M. Giesecke)
  16. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.01
    0.0082332585 = product of:
      0.024699774 = sum of:
        0.0128707085 = weight(_text_:in in 1444) [ClassicSimilarity], result of:
          0.0128707085 = score(doc=1444,freq=26.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2167489 = fieldWeight in 1444, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1444)
        0.011829065 = product of:
          0.02365813 = sum of:
            0.02365813 = weight(_text_:22 in 1444) [ClassicSimilarity], result of:
              0.02365813 = score(doc=1444,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.15476047 = fieldWeight in 1444, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1444)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to the queries concerning the selected objects. Knowing the keywords which are names of the objects will enable situating the user's query in the information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might prove useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information space visualization of a given document (book, article, website). The proposed procedure can practically be used as a new form of representation in order to map the contents of academic books and articles, besides the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can be used to help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  17. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.01
    0.007512714 = product of:
      0.02253814 = sum of:
        0.010709076 = weight(_text_:in in 2659) [ClassicSimilarity], result of:
          0.010709076 = score(doc=2659,freq=18.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 2659, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.011829065 = product of:
          0.02365813 = sum of:
            0.02365813 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.02365813 = score(doc=2659,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems. As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
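    Note: the conversion heuristic described in the abstract above (a symmetric pair of "See Also" references becomes an associative RT link; a reference present in only one of the two headings is assumed hierarchical) can be written down in a few lines. The sketch below does exactly that on invented heading data; it illustrates the described rule, not the Library of Congress's actual conversion program.

    # Invented "See Also" references, keyed by heading.
    see_also = {
        "Doorbells": {"Bells"},
        "Bells": {"Doorbells", "Carillons"},
        "Mammals": {"Primates"},
    }

    def classify_reference(a, b, refs):
        """Apply the 1986-style heuristic described in the abstract."""
        forward = b in refs.get(a, set())
        backward = a in refs.get(b, set())
        if forward and backward:
            return "associative (RT)"      # symmetric pair of references
        if forward or backward:
            return "hierarchical (BT/NT)"  # one-directional reference, assumed hierarchical
        return "no reference"

    print(classify_reference("Doorbells", "Bells", see_also))   # associative (RT)
    print(classify_reference("Mammals", "Primates", see_also))  # hierarchical (BT/NT)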
  18. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.01
    0.007504981 = product of:
      0.022514943 = sum of:
        0.007728611 = weight(_text_:in in 4573) [ClassicSimilarity], result of:
          0.007728611 = score(doc=4573,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 4573, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4573)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 4573) [ClassicSimilarity], result of:
              0.029572664 = score(doc=4573,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 4573, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4573)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This study explores how user subject knowledge influences search task processes and outcomes, as well as how search behavior is influenced by subject-oriented information visualization (IV) tools. To enable integrated searches, the proposed WikiMap + integrates search functions and IV tools (i.e., a topic network and hierarchical topic tree) and gathers information from Wikipedia pages and Google Search results. To evaluate the effectiveness of the proposed interfaces, we design subject-oriented tasks and adopt extended evaluation measures. We recruited 48 novices and 48 knowledgeable users, that is, intermediates, for the evaluation. Our results show that novices using the proposed interface demonstrate better search performance than intermediates using Wikipedia. We therefore conclude that our tools help close the gap between novices and intermediates in information searches. The results also show that intermediates can take advantage of the search tool by leveraging the IV tools to browse subtopics, and formulate better queries with less effort. We conclude that embedding the IV and the search tools in the interface can result in different search behavior but improved task performance. We provide implications to design search systems to include IV features adapted to user levels of subject knowledge to help them achieve better task performance.
    Date
    9.12.2018 16:22:25
  19. Wu, K.-C.; Hsieh, T.-Y.: Affective choosing of clustering and categorization representations in e-book interfaces (2016) 0.01
    0.007032239 = product of:
      0.021096716 = sum of:
        0.006310384 = weight(_text_:in in 3070) [ClassicSimilarity], result of:
          0.006310384 = score(doc=3070,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 3070, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3070)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 3070) [ClassicSimilarity], result of:
              0.029572664 = score(doc=3070,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 3070, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3070)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - The purpose of this paper is to investigate user experiences with a touch-wall interface featuring both clustering and categorization representations of available e-books in a public library to understand human information interactions under work-focused and recreational contexts. Design/methodology/approach - Researchers collected questionnaires from 251 New Taipei City Library visitors who used the touch-wall interface to search for new titles. The authors applied structural equation modelling to examine relationships among hedonic/utilitarian needs, clustering and categorization representations, perceived ease of use (EU) and the extent to which users experienced anxiety and uncertainty (AU) while interacting with the interface. Findings - Utilitarian users who have an explicit idea of what they intend to find tend to prefer the categorization interface. A hedonic-oriented user tends to prefer clustering interfaces. Users reported EU regardless of which interface they engaged with. Results revealed that use of the clustering interface had a negative correlation with AU. Users that seek to satisfy utilitarian needs tended to emphasize the importance of perceived EU, whilst pleasure-seeking users were a little more tolerant of anxiety or uncertainty. Originality/value - The Online Public Access Catalogue (OPAC) encourages library visitors to borrow digital books through the implementation of an information visualization system. This situation poses an opportunity to validate uses and gratification theory. People with hedonic/utilitarian needs displayed different risk-control attitudes and affected uncertainty using the interface. Knowledge about user interaction with such interfaces is vital when launching the development of a new OPAC.
    Date
    20. 1.2015 18:30:22
  20. Osinska, V.; Kowalska, M.; Osinski, Z.: The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.01
    0.007032239 = product of:
      0.021096716 = sum of:
        0.006310384 = weight(_text_:in in 4641) [ClassicSimilarity], result of:
          0.006310384 = score(doc=4641,freq=4.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10626988 = fieldWeight in 4641, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4641)
        0.014786332 = product of:
          0.029572664 = sum of:
            0.029572664 = weight(_text_:22 in 4641) [ClassicSimilarity], result of:
              0.029572664 = score(doc=4641,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.19345059 = fieldWeight in 4641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4641)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of research problems. They can also identify cooperation networks, and consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality as well as attempts to define different user groups. A survey about the interface was sent to select researchers employed at Nicolaus Copernicus University. The results were used to answer the question as to whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13

Types

  • a 114
  • el 27
  • m 9
  • s 2
  • x 2
  • b 1
  • p 1
  • r 1
