Search (114 results, page 1 of 6)

  • × language_ss:"e"
  • × theme_ss:"Visualisierung"
  • × type_ss:"a"
  1. Neubauer, G.: Visualization of typed links in linked data (2017) 0.03
    0.026655678 = product of:
      0.11328663 = sum of:
        0.016571296 = weight(_text_:und in 3912) [ClassicSimilarity], result of:
          0.016571296 = score(doc=3912,freq=8.0), product of:
            0.06767212 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.030532904 = queryNorm
            0.24487628 = fieldWeight in 3912, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.067504965 = weight(_text_:datenverarbeitung in 3912) [ClassicSimilarity], result of:
          0.067504965 = score(doc=3912,freq=2.0), product of:
            0.19315876 = queryWeight, product of:
              6.326249 = idf(docFreq=214, maxDocs=44218)
              0.030532904 = queryNorm
            0.34947917 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.326249 = idf(docFreq=214, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.005405603 = weight(_text_:in in 3912) [ClassicSimilarity], result of:
          0.005405603 = score(doc=3912,freq=6.0), product of:
            0.04153252 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.030532904 = queryNorm
            0.1301535 = fieldWeight in 3912, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.02380476 = weight(_text_:der in 3912) [ClassicSimilarity], result of:
          0.02380476 = score(doc=3912,freq=16.0), product of:
            0.06820339 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.030532904 = queryNorm
            0.34902605 = fieldWeight in 3912, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.23529412 = coord(4/17)
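The breakdown above is Lucene ClassicSimilarity "explain" output. As a hedged sketch (standard tf-idf formula; the function and parameter names are my own), the per-term weight for `und` can be reproduced from the listed factors:

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """Per-term score in Lucene's ClassicSimilarity:
    score = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and
    fieldWeight = sqrt(freq) * idf * fieldNorm."""
    tf = math.sqrt(freq)                  # 2.828427 for freq=8.0
    query_weight = idf * query_norm       # 0.06767212 in the tree above
    field_weight = tf * idf * field_norm  # 0.24487628 in the tree above
    return query_weight * field_weight

# the weight(_text_:und in 3912) branch above
score = classic_similarity(freq=8.0, idf=2.216367,
                           query_norm=0.030532904, field_norm=0.0390625)
# ~0.016571; the document score then sums the four term weights
# and multiplies the sum by coord(4/17).
```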
    
    Abstract
    The thesis deals with visualizations of typed links in Linked Data. The scientific fields that broadly delimit the content of the contribution are the Semantic Web, the Web of Data and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, represents an extension of the World Wide Web. Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are a central requirement and a main part of the data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary purpose of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Starting from this context, the information is extended step by step, with the goal of offering practical guidance, into a network of elaborated design guidelines. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that were worked out through the testing.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  2. Bekavac, B.; Herget, J.; Hierl, S.; Öttl, S.: Visualisierungskomponenten bei webbasierten Suchmaschinen : Methoden, Kriterien und ein Marktüberblick (2007) 0.01
    
    Abstract
    Web-based search engines increasingly include systems with visualization components for presenting results. These visualization approaches differ markedly from one another in aim and execution. The following article describes the visualization methods in use, organizes them by means of a classification, presents the leading freely accessible systems, and compares them against the criteria from that systematization. Typical problem areas are identified, and the most important commonalities and differences among the examined systems are worked out. The article closes by presenting two innovative visualization concepts: visualizing relations within result sets, and visualizing relations when searching for music.
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.3, S.149-158
  3. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    
    Series
    Schriften zur Informationswissenschaft; Bd.66
    Source
    Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  4. Christoforidis, A.; Heuwing, B.; Mandl, T.: Visualising topics in document collections : an analysis of the interpretation process of historians (2017) 0.01
    
    Series
    Schriften zur Informationswissenschaft; Bd. 70
  5. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.00
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  6. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.00
    
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Series
    Lecture notes in computer science ; 5173
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  7. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.00
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in their taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared the thematic and semantic maps of the classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  8. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.00
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
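The abstract applies Freeman's (1979) betweenness centrality to flag pivotal points in co-citation networks. A minimal brute-force sketch (adequate only for small graphs; the toy network below is illustrative, not CiteSpace's actual data or code):

```python
from collections import deque
from itertools import combinations

def betweenness(graph):
    """Freeman's betweenness centrality by brute-force enumeration of
    shortest paths. graph: {node: set(neighbours)}, undirected."""
    bc = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        shortest, best = [], None
        queue = deque([(s,)])
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                continue                      # longer than a known shortest path
            node = path[-1]
            if node == t:
                best = len(path)
                shortest.append(path)
                continue
            for n in graph[node]:
                if n not in path:
                    queue.append(path + (n,))
        for path in shortest:
            for v in path[1:-1]:              # interior nodes only
                bc[v] += 1.0 / len(shortest)
    return bc

# toy co-citation network: 'hub' bridges two article clusters
g = {'a': {'b', 'hub'}, 'b': {'a', 'hub'},
     'c': {'d', 'hub'}, 'd': {'c', 'hub'},
     'hub': {'a', 'b', 'c', 'd'}}
bc = betweenness(g)
# bc['hub'] is the largest value: the bridging node is the pivotal point
```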
    Date
    22. 7.2006 16:11:05
  9. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.00
    
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as a basis for indexing in regard to its ability to adequately cover the contents to be indexed and as a basis for the specific indexing method used is of crucial importance in automatic indexing. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.00
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
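The abstract mentions chaining similarity measures over article metadata. As a hedged illustration of one plausible such measure (not the project's actual code), a bag-of-words cosine similarity over metadata strings might look like:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity of two metadata strings treated as
    bag-of-words vectors -- sketched for illustration only."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Pairwise scores like these could then feed a clustering step that lays related articles out near each other on the map.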
    Content
    Contribution to the thematic issue 'Computerlinguistik und Bibliotheken'. See: http://0277.ch/ojs/index.php/cdrs_0277/article/view/157/355.
  11. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.00
    
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to queries concerning the selected objects. Knowing the keywords which are the names of the objects enables situating the user's query in an information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might become useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information-space visualization of a given document (book, article, website). The proposed procedure can practically be used as a new form of representation in order to map the contents of academic books and articles, alongside the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can be used to help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  12. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.00
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems.
As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows just exactly how Doorbells and Mammals are related.
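The 1986 conversion rule the abstract describes (symmetric "See Also" links become associative, one-way links are assumed hierarchical) can be sketched as follows; the records and the BT direction chosen here are hypothetical, for illustration only:

```python
def convert_see_also(see_also):
    """Sketch of the rule described in the abstract: a 'See Also'
    present in both directions becomes associative (RT); a one-way
    link is assumed hierarchical (rendered as BT here; the actual
    BT/NT direction is an assumption). This encodes the bias towards
    hierarchical status that the author notes."""
    relations = []
    for a, targets in see_also.items():
        for b in sorted(targets):
            kind = 'RT' if a in see_also.get(b, set()) else 'BT'
            relations.append((a, kind, b))
    return relations

# hypothetical pre-conversion records: one symmetric pair, one one-way link
refs = {'Doorbells': {'Bells'}, 'Bells': {'Doorbells'}, 'Mammals': {'Whales'}}
relations = convert_see_also(refs)
```

On this toy data the one-way 'Mammals' -> 'Whales' reference is promoted to a hierarchical relation whether or not that is semantically justified, which is exactly the failure mode the poster visualizes.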
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  13. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.00
    
    Abstract
    This study explores how user subject knowledge influences search task processes and outcomes, as well as how search behavior is influenced by subject-oriented information visualization (IV) tools. To enable integrated searches, the proposed WikiMap+ integrates search functions and IV tools (i.e., a topic network and hierarchical topic tree) and gathers information from Wikipedia pages and Google Search results. To evaluate the effectiveness of the proposed interfaces, we design subject-oriented tasks and adopt extended evaluation measures. We recruited 48 novices and 48 knowledgeable users, that is, intermediates, for the evaluation. Our results show that novices using the proposed interface demonstrate better search performance than intermediates using Wikipedia. We therefore conclude that our tools help close the gap between novices and intermediates in information searches. The results also show that intermediates can take advantage of the search tool by leveraging the IV tools to browse subtopics and formulate better queries with less effort. We conclude that embedding the IV and search tools in the interface can result in different search behavior but improved task performance. We provide implications for designing search systems that include IV features adapted to users' levels of subject knowledge, helping them achieve better task performance.
    Date
    9.12.2018 16:22:25
  14. Wu, K.-C.; Hsieh, T.-Y.: Affective choosing of clustering and categorization representations in e-book interfaces (2016) 0.00
    Abstract
    Purpose - The purpose of this paper is to investigate user experiences with a touch-wall interface featuring both clustering and categorization representations of available e-books in a public library to understand human information interactions under work-focused and recreational contexts.
    Design/methodology/approach - Researchers collected questionnaires from 251 New Taipei City Library visitors who used the touch-wall interface to search for new titles. The authors applied structural equation modelling to examine relationships among hedonic/utilitarian needs, clustering and categorization representations, perceived ease of use (EU) and the extent to which users experienced anxiety and uncertainty (AU) while interacting with the interface.
    Findings - Utilitarian users who have an explicit idea of what they intend to find tend to prefer the categorization interface. A hedonic-oriented user tends to prefer clustering interfaces. Users reported EU regardless of which interface they engaged with. Results revealed that use of the clustering interface had a negative correlation with AU. Users that seek to satisfy utilitarian needs tended to emphasize the importance of perceived EU, whilst pleasure-seeking users were a little more tolerant of anxiety or uncertainty.
    Originality/value - The Online Public Access Catalogue (OPAC) encourages library visitors to borrow digital books through the implementation of an information visualization system. This situation poses an opportunity to validate uses and gratification theory. People with hedonic/utilitarian needs displayed different risk-control attitudes and affected uncertainty using the interface. Knowledge about user interaction with such interfaces is vital when launching the development of a new OPAC.
    Date
    20. 1.2015 18:30:22
  15. Osinska, V.; Kowalska, M.; Osinski, Z.: ¬The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.00
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of research problems. They can also identify cooperation networks and, consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality as well as attempts to define different user groups. A survey about the interface was sent to selected researchers employed at Nicolaus Copernicus University. The results were used to answer the question as to whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  16. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.00
    Abstract
    UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners. Moreover, visualization offers possibilities to use the classification without using the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to face these challenges. In our presentation, we discuss these challenges and demonstrate how the application helps to meet them. Examples of visualizations can be found below.
  17. Petrovich, E.: Science mapping and science maps (2021) 0.00
    Abstract
    Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
    Footnote
    Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
    Series
    Reviews of concepts in knowledge organization
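The term-based maps described in the entry above are built from term co-occurrence in publications. A minimal standard-library sketch of extracting such a weighted co-occurrence network (the publications and terms below are invented toy data, not a real corpus):

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each publication reduced to its set of indexing terms.
publications = [
    {"citation analysis", "science mapping", "visualization"},
    {"science mapping", "visualization", "clustering"},
    {"citation analysis", "science mapping"},
]

# Count how often each pair of terms appears in the same publication;
# the resulting weighted edge list is the raw material of a term map.
edges = Counter()
for terms in publications:
    for pair in combinations(sorted(terms), 2):
        edges[pair] += 1

for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

In a real pipeline these weights would then be normalized (e.g., by an association strength measure), clustered, and laid out in two dimensions to produce the map itself.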
  18. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.00
    Abstract
    The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination, or triangulation, of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
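One way to make the triangulation idea concrete (a sketch under assumed toy data, not the authors' actual procedure) is to overlay the edge sets produced by the three approaches and count how many methods support each link:

```python
from collections import Counter

# Hypothetical edge sets as produced by the three mapping approaches
# (JJCR, SAK, TWRC); the topic labels are invented for illustration.
jjcr = {("hydrology", "water quality"), ("hydrology", "ecology")}
sak = {("hydrology", "water quality"), ("water policy", "ecology")}
twrc = {("hydrology", "water quality"), ("hydrology", "ecology")}

# Triangulation here simply counts how many methods support each link.
support = Counter()
for method_edges in (jjcr, sak, twrc):
    for edge in method_edges:
        support[edge] += 1

# Links confirmed by all three methods form the most robust core of the map.
robust = [edge for edge, votes in support.items() if votes == 3]
print(robust)  # → [('hydrology', 'water quality')]
```

Links supported by only one method are not necessarily wrong, but flagging the level of agreement is what lets the combined map be "systematically interpreted" as the abstract puts it.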
  19. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.00
    Abstract
    The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. Therefore, the study aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy to show the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes across various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualizing techniques generates empirical evidence leading to interpretation.
    Content
    Contribution to: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, Champaign, IL, USA.
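The quantitative comparison the abstract describes reduces, at its core, to counting class numbers per main class in each scheme and contrasting the distributions. A sketch with invented counts (toy figures, not the actual KDC or DDC statistics; the two schemes number some main classes differently, so classes are aligned by subject here):

```python
# Hypothetical class-number counts per comparable main class (toy values,
# not the actual KDC or DDC figures).
kdc = {"Social sciences": 1120, "Natural sciences": 1340, "Literature": 910}
ddc = {"Social sciences": 1450, "Natural sciences": 1505, "Literature": 1210}

# Absolute and relative differences per main class: the raw numbers behind
# the kind of comparative visualization the study describes.
for main_class in kdc:
    diff = kdc[main_class] - ddc[main_class]
    rel = diff / ddc[main_class]
    print(f"{main_class}: KDC - DDC = {diff:+d} ({rel:+.1%})")
```

Repeating the same tabulation across successive editions of each scheme would yield the increasing or decreasing tendencies the abstract mentions.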
  20. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
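The objective that parametric t-SNE minimizes is the same Kullback-Leibler divergence used by ordinary t-SNE; the parametric part is that the low-dimensional points come from a learned mapping rather than being free variables. A NumPy sketch of evaluating that objective for a given embedding (random toy data; training the mapping itself is out of scope here):

```python
import numpy as np

def tsne_kl(P, Y):
    """KL(P || Q) as minimized by (parametric) t-SNE.

    P: symmetric high-dimensional pairwise similarities (zero diagonal,
    entries summing to 1). Y: low-dimensional embedding, one row per point.
    Q uses the Student-t kernel with one degree of freedom.
    """
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    num = 1.0 / (1.0 + sq_dists)      # heavy-tailed similarities
    np.fill_diagonal(num, 0.0)
    Q = num / num.sum()
    mask = P > 0                      # treat 0 * log 0 as 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

# Toy data: a random symmetric P and a random 2-D embedding.
rng = np.random.default_rng(0)
P = rng.random((5, 5))
P = P + P.T
np.fill_diagonal(P, 0.0)
P /= P.sum()
Y = rng.normal(size=(5, 2))
print(tsne_kl(P, Y))  # nonnegative; zero only if Q matches P exactly
```

A parametric version would replace the free `Y` with `Y = f(X, theta)` for some network `f` and backpropagate this loss into `theta`, which is what lets the learned mapping embed unseen points.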
