Search (115 results, page 1 of 6)

  • language_ss:"e"
  • theme_ss:"Visualisierung"
  • type_ss:"a"
  1. Bekavac, B.; Herget, J.; Hierl, S.; Öttl, S.: Visualisierungskomponenten bei webbasierten Suchmaschinen : Methoden, Kriterien und ein Marktüberblick (2007) 0.00
    0.0031662963 = product of:
      0.028496666 = sum of:
        0.0031348949 = weight(_text_:in in 399) [ClassicSimilarity], result of:
          0.0031348949 = score(doc=399,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.10520181 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.02536177 = weight(_text_:der in 399) [ClassicSimilarity], result of:
          0.02536177 = score(doc=399,freq=18.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.5182773 = fieldWeight in 399, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.11111111 = coord(2/18)
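The indented breakdown above is Lucene's "explain" output for ClassicSimilarity (TF-IDF) scoring: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm; the per-term contributions are summed and scaled by the coordination factor (here 2 of 18 query terms matched). A minimal sketch, plugging in the values reported for doc 399:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity (TF-IDF)."""
    query_weight = idf * query_norm                    # idf * queryNorm
    field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.021906832  # copied from the explain tree for doc 399

score_in  = term_score(freq=2.0,  idf=1.3602545,
                       query_norm=QUERY_NORM, field_norm=0.0546875)
score_der = term_score(freq=18.0, idf=2.2337668,
                       query_norm=QUERY_NORM, field_norm=0.0546875)

# coord(2/18): only 2 of the 18 query terms matched this document.
total = (score_in + score_der) * (2 / 18)
```

Run as-is, `total` agrees with the reported 0.0031662963 to within about 1e-9 (floating-point rounding).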
    
    Abstract
Web-based search engines increasingly include systems with visualization components for representing results. The visualization approaches differ markedly from one another in both aim and execution. The following article describes the visualization methods in use, systematizes them by means of a classification, presents the leading freely accessible systems, and compares them against the criteria from that systematization. The typical problem areas are identified, and the most important commonalities and differences among the examined systems are worked out. The article closes with the presentation of two innovative visualization concepts: the visualization of relations within result sets, and the visualization of relations when searching for music.
  2. Linden, E.J. van der; Vliegen, R.; Wijk, J.J. van: Visual Universal Decimal Classification (2007) 0.00
    0.0024847728 = product of:
      0.014908636 = sum of:
        0.003878427 = weight(_text_:in in 548) [ClassicSimilarity], result of:
          0.003878427 = score(doc=548,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1301535 = fieldWeight in 548, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=548)
        0.0060385168 = weight(_text_:der in 548) [ClassicSimilarity], result of:
          0.0060385168 = score(doc=548,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.12339935 = fieldWeight in 548, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=548)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 548) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=548,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=548)
          0.33333334 = coord(1/3)
      0.16666667 = coord(3/18)
    
    Abstract
UDC aims to be a consistent and complete classification system that enables practitioners to classify documents swiftly and smoothly. The eventual goal of UDC is to enable the public at large to retrieve documents from large collections that are classified with UDC. The large size of the UDC Master Reference File (MRF), with over 66,000 records, makes it difficult to obtain an overview and to understand its structure. Moreover, finding the right classification in the MRF turns out to be difficult in practice. Last but not least, retrieval of documents requires insight into and understanding of the coding system. Visualization is an effective means to support the development of UDC as well as its use by practitioners. Moreover, visualization offers possibilities to use the classification without using the coding system as such. MagnaView has developed an application which demonstrates the use of interactive visualization to face these challenges. In our presentation, we discuss these challenges and give a demonstration of how the application helps to meet them. Examples of visualizations can be found below.
    Source
    Extensions and corrections to the UDC. 29(2007), S.297-300
  3. Neubauer, G.: Visualization of typed links in linked data (2017) 0.00
    0.0023286592 = product of:
      0.020957932 = sum of:
        0.003878427 = weight(_text_:in in 3912) [ClassicSimilarity], result of:
          0.003878427 = score(doc=3912,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1301535 = fieldWeight in 3912, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.017079504 = weight(_text_:der in 3912) [ClassicSimilarity], result of:
          0.017079504 = score(doc=3912,freq=16.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.34902605 = fieldWeight in 3912, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.11111111 = coord(2/18)
    
    Abstract
This work deals with the visualization of typed links in Linked Data. The scientific fields that broadly delimit its content are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, represents an extension of the World Wide Web (Web 2.0). Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are the central requirement at the core of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary purpose of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, a step-by-step extension of the information, with the goal of offering practical guidelines, leads to the interlinking of the design guidelines developed. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that emerged from the testing.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  4. Petrovich, E.: Science mapping and science maps (2021) 0.00
    0.0023151538 = product of:
      0.013890923 = sum of:
        0.0050667557 = weight(_text_:in in 595) [ClassicSimilarity], result of:
          0.0050667557 = score(doc=595,freq=16.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.17003182 = fieldWeight in 595, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=595)
        0.0048308135 = weight(_text_:der in 595) [ClassicSimilarity], result of:
          0.0048308135 = score(doc=595,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.09871948 = fieldWeight in 595, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=595)
        0.003993354 = product of:
          0.011980061 = sum of:
            0.011980061 = weight(_text_:29 in 595) [ClassicSimilarity], result of:
              0.011980061 = score(doc=595,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.15546128 = fieldWeight in 595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=595)
          0.33333334 = coord(1/3)
      0.16666667 = coord(3/18)
    
    Abstract
Science maps are visual representations of the structure and dynamics of scholarly knowledge. They aim to show how fields, disciplines, journals, scientists, publications, and scientific terms relate to each other. Science mapping is the body of methods and techniques that have been developed for generating science maps. This entry is an introduction to science maps and science mapping. It focuses on the conceptual, theoretical, and methodological issues of science mapping, rather than on the mathematical formulation of science mapping techniques. After a brief history of science mapping, we describe the general procedure for building a science map, presenting the data sources and the methods to select, clean, and pre-process the data. Next, we examine in detail how the most common types of science maps, namely the citation-based and the term-based, are generated. Both are based on networks: the former on the network of publications connected by citations, the latter on the network of terms co-occurring in publications. We review the rationale behind these mapping approaches, as well as the techniques and methods to build the maps (from the extraction of the network to the visualization and enrichment of the map). We also present less-common types of science maps, including co-authorship networks, interlocking editorship networks, maps based on patents' data, and geographic maps of science. Moreover, we consider how time can be represented in science maps to investigate the dynamics of science. We also discuss some epistemological and sociological topics that can help in the interpretation, contextualization, and assessment of science maps. Then, we present some possible applications of science maps in science policy. In the conclusion, we point out why science mapping may be interesting for all the branches of meta-science, from knowledge organization to epistemology.
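The term-based maps described in the abstract rest on term co-occurrence in publications. As an illustrative sketch with invented toy data (not drawn from the entry), counting how often two terms appear in the same publication yields the weighted edges of such a network:

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus: each publication reduced to its set of indexing terms.
publications = [
    {"visualization", "citation", "network"},
    {"citation", "network", "clustering"},
    {"visualization", "clustering"},
]

# Edge weight = number of publications in which the two terms co-occur.
cooccurrence = Counter()
for terms in publications:
    for pair in combinations(sorted(terms), 2):
        cooccurrence[pair] += 1

# The heaviest edge is the pair co-occurring most often.
strongest = cooccurrence.most_common(1)[0]
```

A real pipeline would extract the terms from titles or abstracts and then visualize the resulting network; the counting step above is the structural core.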
    Date
    27. 5.2022 18:19:29
    Footnote
Contribution to a special issue on 'Science and knowledge organization' with longer overviews of important concepts of knowledge organization.
    Series
Reviews of concepts in knowledge organization
  5. Wen, B.; Horlings, E.; Zouwen, M. van der; Besselaar, P. van den: Mapping science through bibliometric triangulation : an experimental approach applied to water research (2017) 0.00
    0.00221157 = product of:
      0.01326942 = sum of:
        0.0022392108 = weight(_text_:in in 3437) [ClassicSimilarity], result of:
          0.0022392108 = score(doc=3437,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.07514416 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
        0.0060385168 = weight(_text_:der in 3437) [ClassicSimilarity], result of:
          0.0060385168 = score(doc=3437,freq=2.0), product of:
            0.048934754 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.021906832 = queryNorm
            0.12339935 = fieldWeight in 3437, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3437)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 3437) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=3437,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 3437, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3437)
          0.33333334 = coord(1/3)
      0.16666667 = coord(3/18)
    
    Abstract
The idea of constructing science maps based on bibliographic data has intrigued researchers for decades, and various techniques have been developed to map the structure of research disciplines. Most science mapping studies use a single method. However, as research fields have various properties, a valid map of a field should actually be composed of a set of maps derived from a series of investigations using different methods. That leads to the question of what can be learned from a combination, or triangulation, of these different science maps. In this paper we propose a method for triangulation, using the example of water science. We combine three different mapping approaches: journal-journal citation relations (JJCR), shared author keywords (SAK), and title word-cited reference co-occurrence (TWRC). Our results demonstrate that triangulation of JJCR, SAK, and TWRC produces a more comprehensive picture than each method applied individually. The outcomes from the three different approaches can be associated with each other and systematically interpreted to provide insights into the complex multidisciplinary structure of the field of water research.
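One plausible reading of the triangulation idea, not the authors' actual procedure: treat each mapping method as casting a vote for edges between topics, and keep the edges on which at least two methods agree. The method names (JJCR, SAK, TWRC) come from the abstract; the topic labels below are invented:

```python
from collections import Counter

# Invented edge sets standing in for the three mapping approaches.
jjcr = {("hydrology", "ecology"), ("ecology", "policy")}
sak  = {("hydrology", "ecology"), ("hydrology", "engineering")}
twrc = {("ecology", "policy"), ("hydrology", "ecology")}

# An edge survives triangulation if at least two methods agree on it.
votes = Counter(edge for edges in (jjcr, sak, twrc) for edge in edges)
consensus = {edge for edge, n in votes.items() if n >= 2}
```

With these toy inputs the consensus map keeps the two edges confirmed by more than one method and drops the edge seen only once.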
    Date
    16.11.2017 13:29:12
  6. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.00
    0.0018030024 = product of:
      0.016227022 = sum of:
        0.0063334443 = weight(_text_:in in 2755) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=2755,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 2755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.0098935785 = product of:
          0.029680735 = sum of:
            0.029680735 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.029680735 = score(doc=2755,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  7. Yukimo Kobashio, N.; Santos, R.N.M.: Information organization and representation by graphic devices : an interdisciplinary approach (2007) 0.00
    0.0016068673 = product of:
      0.014461806 = sum of:
        0.0044784215 = weight(_text_:in in 1101) [ClassicSimilarity], result of:
          0.0044784215 = score(doc=1101,freq=2.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.15028831 = fieldWeight in 1101, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=1101)
        0.009983385 = product of:
          0.029950155 = sum of:
            0.029950155 = weight(_text_:29 in 1101) [ClassicSimilarity], result of:
              0.029950155 = score(doc=1101,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.38865322 = fieldWeight in 1101, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1101)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Date
    29.12.2007 18:17:29
    Source
La interdisciplinariedad y la transdisciplinariedad en la organización del conocimiento científico : actas del VIII Congreso ISKO-España, León, 18, 19 y 20 de Abril de 2007 : Interdisciplinarity and transdisciplinarity in the organization of scientific knowledge. Ed.: B. Rodriguez Bravo u. M.L. Alvite Diez
  8. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.00
    0.0013908951 = product of:
      0.012518056 = sum of:
        0.0065819086 = weight(_text_:in in 3693) [ClassicSimilarity], result of:
          0.0065819086 = score(doc=3693,freq=12.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22087781 = fieldWeight in 3693, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.0059361467 = product of:
          0.01780844 = sum of:
            0.01780844 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.01780844 = score(doc=3693,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
Generally, Computer Science (CS) classifications are inconsistent in their taxonomy strategies. It is necessary to develop CS taxonomy research that combines its historical perspective, its current knowledge, and its predicted future trends, including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords, as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents, with the results presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved an original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  9. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.00
    0.0013797963 = product of:
      0.012418167 = sum of:
        0.005429798 = weight(_text_:in in 1528) [ClassicSimilarity], result of:
          0.005429798 = score(doc=1528,freq=6.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.1822149 = fieldWeight in 1528, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.006988369 = product of:
          0.020965107 = sum of:
            0.020965107 = weight(_text_:29 in 1528) [ClassicSimilarity], result of:
              0.020965107 = score(doc=1528,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.27205724 = fieldWeight in 1528, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1528)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as a basis for indexing is of crucial importance in automatic indexing: it must adequately cover the contents to be indexed and suit the specific indexing method used. We present an interactive tool for thesaurus evaluation, based on a combination of statistical measures and appropriate visualisation techniques, that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Date
    25. 2.2012 13:51:29
  10. Trentin, G.: Graphic tools for knowledge representation and informal problem-based learning in professional online communities (2007) 0.00
    0.001301036 = product of:
      0.011709324 = sum of:
        0.0067176316 = weight(_text_:in in 1463) [ClassicSimilarity], result of:
          0.0067176316 = score(doc=1463,freq=18.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.22543246 = fieldWeight in 1463, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1463)
        0.0049916925 = product of:
          0.0149750775 = sum of:
            0.0149750775 = weight(_text_:29 in 1463) [ClassicSimilarity], result of:
              0.0149750775 = score(doc=1463,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19432661 = fieldWeight in 1463, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1463)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
The use of graphical representations is very common in information technology and engineering. Although these same tools could be applied effectively in other areas, they are not used because they are hardly known or completely unheard of. This article discusses the results of experimentation carried out on graphical approaches to knowledge representation during research, analysis and problem-solving in the health care sector. The experimentation was carried out on concept mapping and Petri Nets, developed collaboratively online with the aid of the CMapTool and WoPeD graphic applications. Two distinct professional communities were involved in the research, both pertaining to the Local Health Units in Tuscany. One community is made up of head physicians and health care managers, whilst the other is formed by technical staff from the Department of Nutrition and Food Hygiene. It emerged from the experimentation that concept maps are considered more effective in analyzing the knowledge domain related to the problem to be faced (a description of what it is), whereas Petri Nets are more effective in studying and formalizing its possible solutions (a description of what to do). For the same reason, those involved in the experimentation have proposed the complementary rather than alternative use of the two knowledge representation methods as a support for professional problem-solving.
    Date
    28. 2.2008 14:16:29
  11. Vizine-Goetz, D.: DeweyBrowser (2006) 0.00
    0.0012690867 = product of:
      0.01142178 = sum of:
        0.004433411 = weight(_text_:in in 5774) [ClassicSimilarity], result of:
          0.004433411 = score(doc=5774,freq=4.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.14877784 = fieldWeight in 5774, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5774)
        0.006988369 = product of:
          0.020965107 = sum of:
            0.020965107 = weight(_text_:29 in 5774) [ClassicSimilarity], result of:
              0.020965107 = score(doc=5774,freq=2.0), product of:
                0.077061385 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.021906832 = queryNorm
                0.27205724 = fieldWeight in 5774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5774)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Date
    28. 9.2008 19:16:29
    Footnote
Contribution to a themed issue, 'Moving beyond the presentation layer: content and context in the Dewey Decimal Classification (DDC) System'.
  12. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.00
    0.0012533593 = product of:
      0.011280233 = sum of:
        0.0063334443 = weight(_text_:in in 5272) [ClassicSimilarity], result of:
          0.0063334443 = score(doc=5272,freq=16.0), product of:
            0.029798867 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021906832 = queryNorm
            0.21253976 = fieldWeight in 5272, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5272)
        0.0049467892 = product of:
          0.014840367 = sum of:
            0.014840367 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.014840367 = score(doc=5272,freq=2.0), product of:
                0.076713994 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021906832 = queryNorm
                0.19345059 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5272)
          0.33333334 = coord(1/3)
      0.11111111 = coord(2/18)
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
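Freeman's betweenness centrality, used above to highlight pivotal points, counts how often a node lies on the shortest paths between other nodes. A self-contained sketch using Brandes' algorithm on an invented toy network (an illustration of the metric, not CiteSpace's implementation):

```python
from collections import deque, defaultdict

def betweenness(graph):
    """Freeman betweenness centrality via Brandes' algorithm
    (unweighted, undirected graph given as an adjacency dict)."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        stack = []
        preds = defaultdict(list)          # predecessors on shortest paths
        sigma = dict.fromkeys(graph, 0)    # number of shortest paths from s
        sigma[s] = 1
        dist = dict.fromkeys(graph, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:                       # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(graph, 0.0)  # dependency accumulation
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: c / 2 for v, c in bc.items()}  # undirected: halve double count

# Toy citation network: node "C" bridges two clusters.
g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
     "D": ["C", "E"], "E": ["D"]}
scores = betweenness(g)
pivotal = max(scores, key=scores.get)
```

In this toy graph the bridging node "C" scores highest, which is exactly the property CiteSpace exploits to flag potential pivotal points of paradigm shift.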
    Date
    22. 7.2006 16:11:05
  13. Shiri, A.; Molberg, K.: Interfaces to knowledge organization systems in Canadian digital library collections (2005)
    Abstract
    Purpose - The purpose of this paper is to report an investigation into the ways in which Canadian digital library collections have incorporated knowledge organization systems into their search interfaces. Design/methodology/approach - A combination of data-gathering techniques was used: a review of the literature on the application of knowledge organization systems; deep scanning of the web sites of Canadian governmental and academic institutions; and identifying and contacting researchers in the area of knowledge organization as well as people in governmental organizations involved in knowledge organization and information management. Findings - A total of 33 digital collections were identified that have made use of some type of knowledge organization system. Thesauri, subject heading lists and classification schemes were the most widely used knowledge organization systems in the surveyed Canadian digital library collections. Research limitations/implications - The target population for this research was limited to governmental and academic digital library collections. Practical implications - An evaluation of the knowledge organization system interfaces showed that searching, browsing and navigation facilities, as well as bilingual features, call for improvements. Originality/value - This research contributes to the following areas: digital libraries, knowledge organization systems and services, and search interface design.
    Source
    Online information review. 29(2005) no.6, S.604-620
  14. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014)
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to queries concerning the selected objects. Knowing the keywords which are the names of the objects enables situating the user's query in an information space organized as a thesaurus or faceted classification. Objectives: Identification of the areas in the information space which correspond to gaps in the user's personal knowledge or in the domain knowledge might become useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information-space visualization of a given document (book, article, website). The proposed procedure can be used in practice as a new form of representation to map the contents of academic books and articles, besides the traditional index form, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  15. Osiñska, V.: Visual analysis of classification scheme (2010)
    Abstract
    This paper proposes a novel methodology to visualize a classification scheme. It is demonstrated with the Association for Computing Machinery (ACM) Computing Classification System (CCS). The collection, derived from the ACM digital library, contains 37,543 documents classified by the CCS. The assigned classes, subject descriptors, and keywords were processed into a dataset to produce a graphical representation of the documents. The general conception is based on the similarity of co-classes (themes), proportional to the number of common publications. The final number of all possible classes and subclasses in the collection was 353, and therefore the similarity matrix of co-classes had the same dimension. A spherical surface was chosen as the target information space. The locations of class and document nodes on the sphere were obtained by means of Multidimensional Scaling coordinates. By representing the surface on a plane, like a map projection, it is possible to analyze the visualization layout. The graphical patterns were organized into colour clusters. For evaluation of the resulting visualization maps, graphics filtering was applied. The proposed method can be very useful in interdisciplinary research fields. It allows a great amount of heterogeneous information to be conveyed in a compact display, including topics, relationships among topics, frequency of occurrence, importance, and changes of these properties over time.
    Date
    6. 1.2011 19:29:12
  16. Soylu, A.; Giese, M.; Jimenez-Ruiz, E.; Kharlamov, E.; Zheleznyakov, D.; Horrocks, I.: Towards exploiting query history for adaptive ontology-based visual query formulation (2014)
    Abstract
    Grounded in real industrial use cases, we recently proposed an ontology-based visual query system for SPARQL, named OptiqueVQS. Ontology-based visual query systems employ ontologies and visual representations to depict the domain of interest and queries, and promise to enable end users without any technical background to access data on their own. However, even with relatively small ontologies, the number of ontology elements to choose from increases drastically, and hence hinders usability. Therefore, in this paper, we propose a method that uses the log of past queries to rank and suggest query extensions as a user types a query, and identify emerging issues to be addressed.
    Series
    Communications in computer and information science; 478
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al
  17. Chowdhury, S.; Chowdhury, G.G.: Using DDC to create a visual knowledge map as an aid to online information retrieval (2004)
    Abstract
    Selection of search terms in an online search environment can be facilitated by the visual display of a knowledge map showing the various concepts and their links. This paper reports preliminary research aimed at designing a prototype knowledge map using DDC and its visual display. The prototype knowledge map, created using the Protégé and TGViz freeware, is demonstrated, and further areas of research in this field are discussed.
    Content
    1. Introduction Web search engines and digital libraries usually expect users to use search terms that most accurately represent their information needs. Finding the most appropriate search terms to represent an information need is an age-old problem in information retrieval. Keyword or phrase searches may produce good results as long as the search terms or phrase(s) match those used by the authors and have been chosen for indexing by the information retrieval system concerned. Since this does not always happen, a large number of false drops are produced by information retrieval systems. The retrieval results become worse in very large systems that deal with millions of records, such as web search engines and digital libraries. Vocabulary control tools are used to improve the performance of text retrieval systems. Thesauri, the most common type of vocabulary control tool used in information retrieval, appeared in the late fifties, designed for use with the emerging post-coordinate indexing systems of that time. They are used to exert terminology control in indexing, and to aid searching by allowing the searcher to select appropriate search terms. A large volume of literature exists describing the design features of, and experiments with the use of, thesauri in various types of information retrieval systems (see, for example, Furnas et al., 1987; Bates, 1986, 1998; Milstead, 1997; and Shiri et al., 2002).
    Date
    29. 8.2004 13:37:50
    Series
    Advances in knowledge organization; vol.9
  18. Wu, I.-C.; Vakkari, P.: Supporting navigation in Wikipedia by information visualization : extended evaluation measures (2014)
    Abstract
    Purpose - The authors introduce two semantics-based navigation applications that facilitate information-seeking activities in internal link-based web sites in Wikipedia. These applications aim to help users find concepts within a topic and related articles on a given topic quickly, and then gain topical knowledge from internal link-based encyclopedia web sites. The paper aims to discuss these issues. Design/methodology/approach - The WNavis application consists of three information visualization (IV) tools: a topic network, a hierarchical topic tree, and summaries for topics. The WikiMap application consists of a topic network. The goal of the topic network and topic tree tools is to help users find the major concepts of a topic and easily identify relationships between them. In addition, to locate specific information and enable users to explore and read topic-related articles quickly, the topic tree and topic summaries support users in gaining topical knowledge. The authors then apply the k-clique cohesion indicator to analyze the subtopics of the seed query and identify the best clustering results via the cosine measure. The authors utilize four metrics (correctness, time cost, usage behaviors, and satisfaction) to evaluate the three interfaces. These metrics measure both the outputs and outcomes of the applications. As a baseline system for evaluation, the authors used a traditional Wikipedia interface. For the evaluation, the authors conducted an experimental user study with 30 participants.
    Findings - The results indicate that both WikiMap and WNavis supported users in identifying concepts and their relations better than the baseline. In topical tasks, WNavis outperformed both WikiMap and the baseline system. Although there were no time differences in finding concepts or answering topical questions, the test systems provided users with a greater gain per time unit. The users of WNavis leaned on the hierarchical tree rather than the other tools, whereas WikiMap users used the topic map. Research limitations/implications - The findings have implications for the design of IR support tools in knowledge-intensive web sites that help users explore topics and concepts. Originality/value - The authors explored to what extent the use of each IV support tool contributed to successful exploration of topics in search tasks. The authors propose extended task-based evaluation measures to understand how each application provides useful context for users to accomplish the tasks and attain the search goals. That is, the authors evaluate not only the output of the search results, e.g. the number of relevant items retrieved, but also the outcome provided by the system for assisting users in attaining the search goal.
    Date
    8. 4.2015 16:20:29
  19. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008)
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems.
As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows just exactly how Doorbells and Mammals are related.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  20. Mercun, T.; Zumer, M.; Aalberg, T.: Presenting bibliographic families : Designing an FRBR-based prototype using information visualization (2016)
    Abstract
    Purpose - Despite the importance of bibliographic information systems for discovering and exploring library resources, some of the core functionality that should be provided to support users in their information-seeking process is still missing. Investigating these issues, this paper aims to design a solution that would fulfil the missing objectives. Design/methodology/approach - Building on the concepts of a work family, functional requirements for bibliographic records (FRBR), and information visualization, the paper proposes a model and user-interface design that could support a more efficient and user-friendly presentation of, and navigation in, bibliographic information systems. Findings - The proposed design brings together all versions of a work, related works, and other works by and about the author, and shows how the model was implemented in the FrbrVis prototype system using a hierarchical visualization layout. Research limitations/implications - Although issues related to discovery and exploration apply to various material types, the research first focused on works of fiction and was also limited by the selected sample of records. Practical implications - The model for presenting and interacting with FRBR-based data can serve as a good starting point for future developments and implementations. Originality/value - With FRBR concepts being gradually integrated into cataloguing rules, formats, and various bibliographic services, one of the important questions that has not really been investigated and studied is how the new type of data should be presented to users in a way that exploits the true potential of the changes.
    Date
    15. 4.2016 14:29:41
