Search (114 results, page 1 of 6)

  • × language_ss:"e"
  • × theme_ss:"Visualisierung"
  • × type_ss:"a"
  1. Bekavac, B.; Herget, J.; Hierl, S.; Öttl, S.: Visualisierungskomponenten bei webbasierten Suchmaschinen : Methoden, Kriterien und ein Marktüberblick (2007) 0.02
    0.016708793 = product of:
      0.050126377 = sum of:
        0.006246961 = weight(_text_:in in 399) [ClassicSimilarity], result of:
          0.006246961 = score(doc=399,freq=2.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.10520181 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.043879416 = weight(_text_:und in 399) [ClassicSimilarity], result of:
          0.043879416 = score(doc=399,freq=14.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.4535172 = fieldWeight in 399, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.33333334 = coord(2/6)
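The explain tree above is standard Lucene `ClassicSimilarity` (TF-IDF) debug output. As a sketch of how its numbers combine, assuming Lucene's documented formulas (tf = sqrt(freq), queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, score = coord × Σ queryWeight × fieldWeight), the first hit's score can be reproduced from the constants printed in the tree; small float32 rounding differences against the printed values are expected:

```python
import math

# Reproduces the score of hit 1 from the explain tree's own constants.
# Formulas follow Lucene's ClassicSimilarity (TF-IDF).

def term_score(freq, idf, query_norm, field_norm):
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(freq)
    query_weight = idf * query_norm       # queryWeight
    field_weight = tf * idf * field_norm  # fieldWeight
    return query_weight * field_weight

QUERY_NORM = 0.043654136
FIELD_NORM = 0.0546875  # fieldNorm(doc=399)

score_in  = term_score(freq=2.0,  idf=1.3602545, query_norm=QUERY_NORM, field_norm=FIELD_NORM)
score_und = term_score(freq=14.0, idf=2.216367,  query_norm=QUERY_NORM, field_norm=FIELD_NORM)

coord = 2 / 6  # 2 of the 6 query terms matched this document
doc_score = coord * (score_in + score_und)
print(doc_score)  # agrees with the printed 0.016708793 to ~7 decimal places
```

The two per-term values match the `weight(_text_:in ...)` = 0.006246961 and `weight(_text_:und ...)` = 0.043879416 lines above.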
    
    Abstract
    Web-based search engines increasingly offer systems with visualization components for presenting results. The visualization approaches differ considerably in both aim and execution. This article describes the visualization methods in use, systematizes them in a classification, presents the leading freely accessible systems, and compares them against the criteria derived from that systematization. Typical problem areas are identified, and the most important similarities and differences among the examined systems are worked out. The article closes with the presentation of two innovative visualization concepts: the visualization of relations within result sets, and the visualization of relations in music search.
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.3, S.149-158
  2. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  3. Neubauer, G.: Visualization of typed links in linked data (2017) 0.01
    Abstract
    This work deals with visualizations of typed links in Linked Data. The scientific fields that broadly frame the contribution are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, proposed by Tim Berners-Lee in 2001, is an extension of the World Wide Web. Current research concerns the linkability of information on the World Wide Web. To make such connections perceivable and processable, visualizations are a central requirement of the data processing. In the context of the Semantic Web, representations of interconnected information are handled as graphs. The primary motivation of this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is extended step by step toward a connected set of design guidelines, with the goal of offering practical recommendations. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that emerged from the testing.
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  4. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.01
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Series
    Lecture notes in computer science ; 5173
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  5. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in their taxonomy strategies. It is necessary to develop CS taxonomy research that combines its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords, as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of the classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for a content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  6. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.01
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
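The abstract names Freeman's (1979) betweenness centrality as the metric for highlighting pivotal points. A minimal sketch of that metric (Brandes' algorithm on a toy undirected graph; the node names and edges are illustrative, not data from the paper) shows how nodes bridging two clusters score highest:

```python
from collections import deque

def betweenness(adj):
    """Freeman betweenness centrality of an unweighted, undirected graph,
    computed with Brandes' algorithm (adj: node -> list of neighbours)."""
    cb = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS from s, counting shortest paths (sigma).
        order = []
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                cb[w] += delta[w]
    # Each unordered pair was counted from both endpoints.
    return {v: c / 2 for v, c in cb.items()}

# Two triangles joined by the bridge C-D: C and D are the "pivotal points".
adj = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["C", "E", "F"], "E": ["D", "F"], "F": ["D", "E"],
}
print(betweenness(adj))  # C and D score 6.0; all other nodes 0.0
```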
    Date
    22. 7.2006 16:11:05
  7. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.01
    Abstract
    Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as the basis for indexing is of crucial importance in automatic indexing: it must adequately cover the contents to be indexed and suit the specific indexing method used. We present an interactive tool for thesaurus evaluation, based on a combination of statistical measures and appropriate visualisation techniques, that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  8. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.01
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
    Content
    Contribution to a special issue on 'Computerlinguistik und Bibliotheken'. See: http://0277.ch/ojs/index.php/cdrs_0277/article/view/157/355.
  9. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.01
    Abstract
    Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to queries concerning selected objects. Knowing the keywords that name the objects enables situating the user's query in an information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas of the information space that correspond to gaps in the user's personal knowledge, or in the domain knowledge, may prove useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by the relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from a "journey" across the map, which is a visualization of the knowledge contained in the analysed document, form the crucial part of this paper. Results: The direct result of the research is a model of the information-space visualization of a given document (book, article, website). The proposed procedure can be used in practice as a new form of representation for mapping the contents of academic books and articles, alongside the traditional index, especially as an auxiliary tool for e-books. In teaching, visualization of the information space of a document can help students understand the issues of classification, categorization and the representation of new knowledge emerging in the human mind.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  10. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.01
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure of these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems.
    As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows exactly how Doorbells and Mammals are related.
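The 1986 conversion heuristic the abstract describes (a reciprocated "See Also" becomes associative, a one-directional one is assumed hierarchical) can be sketched directly. The heading names below are illustrative, not actual LCSH records:

```python
# Hedged sketch of the conversion rule as described in the abstract:
# a "See Also" reference that is reciprocated becomes associative (RT),
# while a one-directional reference is assumed to be hierarchical (BT/NT).

def classify_see_also(see_also):
    """see_also maps each heading to the set of headings it references."""
    relations = []
    for a, targets in see_also.items():
        for b in sorted(targets):
            if a in see_also.get(b, set()):
                if a < b:  # emit each reciprocated pair only once
                    relations.append((a, b, "RT"))
            else:
                # Asymmetric link: promoted to hierarchical by the rule --
                # exactly where spurious BT/NT relations crept in.
                relations.append((a, b, "BT/NT"))
    return relations

refs = {
    "Cats": {"Dogs"},
    "Dogs": {"Cats"},          # reciprocated -> associative (RT)
    "Doorbells": {"Mammals"},  # one-directional -> assumed hierarchical
    "Mammals": set(),
}
print(classify_see_also(refs))
# -> [('Cats', 'Dogs', 'RT'), ('Doorbells', 'Mammals', 'BT/NT')]
```

Inconsistent pre-conversion data thus turns a merely unreciprocated cross-reference into a bogus broader/narrower relation, which is the structural problem the poster visualizes.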
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  11. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.01
    Abstract
    This study explores how user subject knowledge influences search task processes and outcomes, as well as how search behavior is influenced by subject-oriented information visualization (IV) tools. To enable integrated searches, the proposed WikiMap + integrates search functions and IV tools (i.e., a topic network and hierarchical topic tree) and gathers information from Wikipedia pages and Google Search results. To evaluate the effectiveness of the proposed interfaces, we design subject-oriented tasks and adopt extended evaluation measures. We recruited 48 novices and 48 knowledgeable users, that is, intermediates, for the evaluation. Our results show that novices using the proposed interface demonstrate better search performance than intermediates using Wikipedia. We therefore conclude that our tools help close the gap between novices and intermediates in information searches. The results also show that intermediates can take advantage of the search tool by leveraging the IV tools to browse subtopics, and formulate better queries with less effort. We conclude that embedding the IV and the search tools in the interface can result in different search behavior but improved task performance. We provide implications to design search systems to include IV features adapted to user levels of subject knowledge to help them achieve better task performance.
    Date
    9.12.2018 16:22:25
  12. Wu, K.-C.; Hsieh, T.-Y.: Affective choosing of clustering and categorization representations in e-book interfaces (2016) 0.01
    Abstract
    Purpose - The purpose of this paper is to investigate user experiences with a touch-wall interface featuring both clustering and categorization representations of available e-books in a public library to understand human information interactions under work-focused and recreational contexts. Design/methodology/approach - Researchers collected questionnaires from 251 New Taipei City Library visitors who used the touch-wall interface to search for new titles. The authors applied structural equation modelling to examine relationships among hedonic/utilitarian needs, clustering and categorization representations, perceived ease of use (EU) and the extent to which users experienced anxiety and uncertainty (AU) while interacting with the interface. Findings - Utilitarian users who have an explicit idea of what they intend to find tend to prefer the categorization interface. A hedonic-oriented user tends to prefer clustering interfaces. Users reported EU regardless of which interface they engaged with. Results revealed that use of the clustering interface had a negative correlation with AU. Users that seek to satisfy utilitarian needs tended to emphasize the importance of perceived EU, whilst pleasure-seeking users were a little more tolerant of anxiety or uncertainty. Originality/value - The Online Public Access Catalogue (OPAC) encourages library visitors to borrow digital books through the implementation of an information visualization system. This situation poses an opportunity to validate uses and gratification theory. People with hedonic/utilitarian needs displayed different risk-control attitudes and affected uncertainty using the interface. Knowledge about user interaction with such interfaces is vital when launching the development of a new OPAC.
    Date
    20. 1.2015 18:30:22
  13. Osinska, V.; Kowalska, M.; Osinski, Z.: ¬The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.01
    Abstract
    Studies on the state and structure of digital knowledge about science generally operate at the macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of individual researchers. This free software, based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of their research problems. They can also identify cooperation networks and, consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality, as well as an attempt to define different user groups. A survey about the interface was sent to selected researchers employed at Nicolaus Copernicus University. The results were used to answer the question of whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  14. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.00
    Abstract
    The changes in KO systems induced by sociocultural influences may include changes in both classificatory principles and cultural features. The proposed study examines the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. The study therefore aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted of the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy for showing the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes through various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization techniques generates empirical evidence leading to interpretation.
    Content
    Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, Champaign, IL, USA.
  15. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
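As a rough illustration of the idea described in the abstract (not the paper's neural-network implementation), a parametric embedding can be sketched as a linear map trained by gradient descent on a symmetric-SNE KL objective; the data, the linear form of the map, and all learning-rate choices below are invented for the sketch:

```python
import numpy as np

# Minimal sketch of a *parametric* embedding that preserves local structure,
# in the spirit of (but much simpler than) parametric t-SNE: instead of a
# neural network, learn a linear map W from 10-D data to a 2-D latent space
# by gradient descent on the KL divergence between pairwise affinities.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))            # toy high-dimensional data

def affinities(Z):
    """Gaussian pairwise affinities, normalized over all pairs."""
    d = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d)
    np.fill_diagonal(A, 0.0)
    return A / A.sum()

def kl(P, Q):
    m = P > 0
    return float((P[m] * np.log(P[m] / Q[m])).sum())

P = affinities(X)                        # neighbor structure in data space
W = rng.normal(scale=0.1, size=(10, 2))  # parameters of the embedding map
kl_init = kl(P, affinities(X @ W))

for _ in range(300):
    Y = X @ W
    Q = affinities(Y)
    diff = Y[:, None, :] - Y[None, :, :]
    # symmetric-SNE gradient: dC/dy_i = 4 * sum_j (p_ij - q_ij)(y_i - y_j)
    grad_Y = 4.0 * ((P - Q)[:, :, None] * diff).sum(axis=1)
    W -= 0.01 * (X.T @ grad_Y)           # chain rule through Y = X @ W

Y = X @ W
kl_final = kl(P, affinities(Y))          # lower = local structure better kept
```

Because the map is parametric, a new point embeds via `x_new @ W` without re-running the optimization; that out-of-sample property is what the paper's learned neural mapping provides in a far more expressive form.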
  16. Hajdu Barát, A.: Usability and the user interfaces of classical information retrieval languages (2006) 0.00
    Abstract
    This paper examines some traditional information searching methods and their role in Hungarian OPACs. What challenges arise in the digital and online environment? How do users work with these methods, and do they give users satisfactory results? What kinds of techniques are users employing? In this paper I examine the user interfaces of UDC, thesauri, subject headings etc. in Hungarian libraries. The key question of the paper is whether a universal system or local solutions offer the best approach to searching in the digital environment.
    Series
    Advances in knowledge organization; vol.10
  17. Yi, K.; Chan, L.M.: ¬A visualization software tool for Library of Congress Subject Headings (2008) 0.00
    Content
    The aim of this study is to develop a software tool, VisuaLCSH, for effective searching, browsing, and maintenance of LCSH. This tool enables visualizing subject headings and hierarchical structures implied and embedded in LCSH. A conceptual framework for converting the hierarchical structure of headings in LCSH to an explicit tree structure is proposed, described, and implemented. The highlights of VisuaLCSH are summarized below: 1) revealing multiple aspects of a heading; 2) normalizing the hierarchical relationships in LCSH; 3) showing multi-level hierarchies in LCSH sub-trees; 4) improving the navigational function of LCSH in retrieval; and 5) enabling the implementation of generic search, i.e., the 'exploding' feature, in searching LCSH.
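The conversion the study describes, turning the hierarchical relationships implied in LCSH into an explicit tree, can be sketched roughly as follows; the headings and the `bt` map are invented toy data, not real LCSH records or the VisuaLCSH API:

```python
from collections import defaultdict

# Hypothetical sketch: make implicit broader-term (BT) links among subject
# headings into an explicit tree, as VisuaLCSH does for LCSH (toy data).
bt = {  # heading -> its broader heading
    "Ontologies": "Knowledge organization",
    "Thesauri": "Knowledge organization",
    "Subject headings": "Knowledge organization",
    "LCSH": "Subject headings",
}

# Invert BT links into explicit parent -> children edges.
children = defaultdict(list)
for narrow, broad in bt.items():
    children[broad].append(narrow)

def tree(root, depth=0):
    """Render the explicit hierarchy as indented lines (depth-first)."""
    lines = ["  " * depth + root]
    for child in sorted(children.get(root, [])):
        lines.extend(tree(child, depth + 1))
    return lines

hierarchy = tree("Knowledge organization")
```

An explicit tree like `hierarchy` is what enables the 'exploding' generic search mentioned above: a query on a heading can simply collect the subtree rooted at it.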
    Series
    Advances in knowledge organization; vol.11
    Source
    Culture and identity in knowledge organization: Proceedings of the Tenth International ISKO Conference 5-8 August 2008, Montreal, Canada. Ed. by Clément Arsenault and Joseph T. Tennis
  18. Rolling, L.: ¬The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.00
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage of using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One can also get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations.
Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
    Footnote
    Original in: Classification research: Proceedings of the Second International Study Conference held at Hotel Prins Hamlet, Elsinore, Denmark, 14th-18th Sept. 1964. Ed.: Pauline Atherton. Copenhagen: Munksgaard 1965. S.295-310.
  19. Large, J.A.; Beheshti, J.: Interface design, Web portals, and children (2005) 0.00
    Abstract
    Children seek information in order to complete school projects on a wide variety of topics, as well as to support their various leisure activities. Such information can be found in print documents, but increasingly young people are turning to the Web to meet their information needs. In order to exploit this resource, however, children must be able to search or browse digital information through the intermediation of an interface. In particular, they must use Web-based portals that in most cases have been designed for adult users. Guidelines for interface design are not hard to find, but typically they also postulate adult rather than juvenile users. The authors discuss their own research work that has focused upon what young people themselves have to say about the design of portal interfaces. They conclude that specific interface design guidelines are required for young users rather than simply relying upon general design guidelines, and that in order to formulate such guidelines it is necessary to actively include the young people themselves in this process.
  20. Eito Brun, R.: Retrieval effectiveness in software repositories : from faceted classifications to software visualization techniques (2006) 0.00
    Abstract
    The internal organization of large software projects requires an extraordinary effort in the development and maintenance of repositories made up of software artifacts (business components, data models, functional and technical documentation, etc.). During the software development process, different artifacts are created to help users in the transfer of knowledge and to enable communication between workers and teams. The storage, maintenance and publication of these artifacts in knowledge bases - usually referred to as "software repositories" - are a useful tool for future software development projects, as they contain the collective, learned experience of the teams and provide the basis to estimate and reuse the work completed in the past. Different techniques similar to those used by the library community have been applied in the past to organize these software repositories and help users in the difficult task of identifying and retrieving artifacts (software and documentation). These techniques include software classification - with a special emphasis on faceted classifications - keyword-based retrieval and formal method techniques. The paper discusses the different knowledge organization techniques applied in these repositories to identify and retrieve software artifacts and to ensure the reusability of software components and documentation at the different phases of the development process across different projects. An enumeration of the main approaches documented in the specialized literature is provided.
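A faceted lookup of the kind surveyed in the abstract can be sketched minimally as follows; the artifact records and facet names are invented for illustration, not taken from any repository described in the paper:

```python
# Toy sketch of faceted retrieval over a software repository: each artifact
# carries facet values, and a query is a partial assignment of facet values.
artifacts = [
    {"name": "auth-lib", "kind": "component", "language": "Java", "domain": "security"},
    {"name": "erd.pdf", "kind": "data model", "language": None, "domain": "billing"},
    {"name": "login-spec", "kind": "documentation", "language": None, "domain": "security"},
]

def facet_search(items, **facets):
    """Return artifacts matching every requested facet value."""
    return [a for a in items
            if all(a.get(f) == v for f, v in facets.items())]

hits = facet_search(artifacts, domain="security")
```

Narrowing by an additional facet, e.g. `facet_search(artifacts, domain="security", kind="component")`, mirrors how a faceted classification lets developers drill down to reusable components step by step.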
    Series
    Advances in knowledge organization; vol.10
