Search (14 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Visualisierung"
  • type_ss:"el"
  1. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.01
    0.007380123 = product of:
      0.05535092 = sum of:
        0.033011325 = weight(_text_:software in 3205) [ClassicSimilarity], result of:
          0.033011325 = score(doc=3205,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.2629875 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.022339594 = weight(_text_:web in 3205) [ClassicSimilarity], result of:
          0.022339594 = score(doc=3205,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
      0.13333334 = coord(2/15)
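The explain tree above (and the analogous trees below) follows Lucene's ClassicSimilarity. As a sanity check, here is a minimal Python sketch that reproduces its arithmetic; the constants are copied from the tree itself, and the idf and tf formulas are Lucene's classic ones (idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(freq)).

```python
import math

# Reproduces the explain tree above, assuming Lucene ClassicSimilarity:
# each term contributes queryWeight * fieldWeight, where
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
# and the document score is the coord-scaled sum over matching terms.

def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))   # 3.9671519 for "software"

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
    w = idf(doc_freq, max_docs)
    return (w * query_norm) * (tf * w * field_norm)    # queryWeight * fieldWeight

query_norm = 0.031640913
software = term_score(2.0, 2274, 44218, query_norm, 0.046875)  # 0.033011325
web = term_score(2.0, 4597, 44218, query_norm, 0.046875)       # 0.022339594
print((software + web) * 2 / 15)   # coord(2/15) -> ~0.007380123, as shown above
```

The same template explains every score below; only the term statistics, field norms, and coord factor change.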
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
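The pipeline sentence above (summarization techniques plus similarity measures chained over article metadata) is easiest to picture with a toy example. The sketch below is not Open Knowledge Maps' actual pipeline, just an illustration of chaining a TF-IDF similarity step into a clustering step; the sample metadata strings are invented.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented article metadata (e.g. title plus abstract).
docs = [
    "open knowledge maps visual interface to scientific knowledge",
    "knowledge maps for discovery of scientific literature",
    "interactive visualization of large graphs and networks",
    "graph drawing and network layout at scale",
]

tfidf = TfidfVectorizer().fit_transform(docs)   # summarize documents as term weights
sim = cosine_similarity(tfidf)                  # similarity measure on the metadata
labels = AgglomerativeClustering(               # chain similarity into clustering
    n_clusters=2, metric="precomputed", linkage="average"
).fit_predict(1.0 - sim)                        # distance = 1 - similarity
print(labels)                                   # two topical groups, e.g. [0 0 1 1]
```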
  2. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.006116273 = product of:
      0.045872044 = sum of:
        0.033011325 = weight(_text_:software in 1289) [ClassicSimilarity], result of:
          0.033011325 = score(doc=1289,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.2629875 = fieldWeight in 1289, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.01286072 = product of:
          0.02572144 = sum of:
            0.02572144 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.02572144 = score(doc=1289,freq=2.0), product of:
                0.110801086 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.031640913 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.13333334 = coord(2/15)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  3. Munzner, T.: Interactive visualization of large graphs and networks (2000) 0.01
    0.005742603 = product of:
      0.04306952 = sum of:
        0.02200755 = weight(_text_:software in 4746) [ClassicSimilarity], result of:
          0.02200755 = score(doc=4746,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.17532499 = fieldWeight in 4746, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
        0.021061972 = weight(_text_:web in 4746) [ClassicSimilarity], result of:
          0.021061972 = score(doc=4746,freq=4.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.2039694 = fieldWeight in 4746, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=4746)
      0.13333334 = coord(2/15)
    
    Abstract
    Many real-world domains can be represented as large node-link graphs: backbone Internet routers connect with 70,000 other hosts, mid-sized Web servers handle between 20,000 and 200,000 hyperlinked documents, and dictionaries contain millions of words defined in terms of each other. Computational manipulation of such large graphs is common, but previous tools for graph visualization have been limited to datasets of a few thousand nodes. Visual depictions of graphs and networks are external representations that exploit human visual processing to reduce the cognitive load of many tasks that require understanding of global or local structure. We assert that the two key advantages of computer-based systems for information visualization over traditional paper-based visual exposition are interactivity and scalability. We also argue that designing visualization software by taking the characteristics of a target user's task domain into account leads to systems that are more effective and scale to larger datasets than previous work. This thesis contains a detailed analysis of three specialized systems for the interactive exploration of large graphs, relating the intended tasks to the spatial layout and visual encoding choices. We present two novel algorithms for specialized layout and drawing that use quite different visual metaphors. The H3 system for visualizing the hyperlink structures of web sites scales to datasets of over 100,000 nodes by using a carefully chosen spanning tree as the layout backbone, 3D hyperbolic geometry for a Focus+Context view, and provides a fluid interactive experience through guaranteed frame rate drawing. The Constellation system features a highly specialized 2D layout intended to spatially encode domain-specific information for computational linguists checking the plausibility of a large semantic network created from dictionaries. The Planet Multicast system for displaying the tunnel topology of the Internet's multicast backbone provides a literal 3D geographic layout of arcs on a globe to help MBone maintainers find misconfigured long-distance tunnels. Each of these three systems provides a very different view of the graph structure, and we evaluate their efficacy for the intended task. We generalize these findings in our analysis of the importance of interactivity and specialization for graph visualization systems that are effective and scalable.
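The "spanning tree as layout backbone" idea behind the H3 system can be sketched in a few lines. This is only a rough 2-D stand-in (H3 chooses its spanning tree carefully and embeds it in 3-D hyperbolic space); the graph below is a random stand-in for a hyperlinked web site.

```python
import networkx as nx

G = nx.barabasi_albert_graph(200, 3, seed=42)   # stand-in for a web-site link graph
backbone = nx.bfs_tree(G, 0)                    # spanning tree rooted at node 0
pos = nx.spring_layout(backbone, seed=42)       # lay out the tree, not the full graph
# Non-tree links can then be drawn (or elided) on top of the backbone positions;
# layout cost is driven by the tree's n-1 edges rather than all edges of G.
print(len(G.edges()), len(backbone.edges()))    # 591 vs 199
```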
  4. Eckert, K.: The ICE-map visualization (2011) 0.00
    0.004639485 = product of:
      0.06959227 = sum of:
        0.06959227 = weight(_text_:evaluation in 4743) [ClassicSimilarity], result of:
          0.06959227 = score(doc=4743,freq=4.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.5243376 = fieldWeight in 4743, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
      0.06666667 = coord(1/15)
    
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and of the result of the statistical analysis by means of a treemap.
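The "notion of information content" invoked above can be made concrete with a small example. A minimal sketch, assuming Resnik-style information content over usage counts (the actual ICE-Map statistic is described in the paper; the toy hierarchy and counts below are invented):

```python
import math

children = {"science": ["physics", "biology"], "physics": [], "biology": []}
usage = {"science": 5, "physics": 30, "biology": 15}    # indexing statistics

def subtree_usage(c):
    """Usages of concept c plus all of its descendants."""
    return usage[c] + sum(subtree_usage(ch) for ch in children[c])

total = subtree_usage("science")
for c in children:
    p = subtree_usage(c) / total          # relative usage of the subtree
    print(c, round(-math.log(p), 3))      # IC(c) = -log p(c); rare = high IC
```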
  5. Singh, A.; Sinha, U.; Sharma, D.K.: Semantic Web and data visualization (2020) 0.00
    0.0042123944 = product of:
      0.063185915 = sum of:
        0.063185915 = weight(_text_:web in 79) [ClassicSimilarity], result of:
          0.063185915 = score(doc=79,freq=36.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.6119082 = fieldWeight in 79, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=79)
      0.06666667 = coord(1/15)
    
    Abstract
    With the terrific growth of data volume and data being produced every second on millions of devices across the globe, there is a desperate need to manage the unstructured data available on web pages efficiently. The Semantic Web, also known as the Web of Trust, structures the scattered data on the Internet according to the needs of the user. It is an extension of the World Wide Web (WWW) which focuses on manipulating web data on behalf of humans. Because the Semantic Web can integrate data from disparate sources and thereby become more user-friendly, it is an emerging trend. Tim Berners-Lee first introduced the term Semantic Web, and since then it has come a long way to become a more intelligent and intuitive web. Data visualization plays an essential role in explaining complex concepts in a universal manner through pictorial representation, and the Semantic Web helps in broadening the potential of data visualization, which makes the two an apt combination. The objective of this chapter is to provide fundamental insights into semantic web technologies, and it also elucidates the issues as well as the solutions regarding the semantic web. It further highlights the semantic web architecture in detail while comparing it with the traditional search system, classifying the architecture into three major pillars, i.e. RDF, Ontology, and XML. Moreover, it describes different semantic web tools used in the framework and technology, and it illustrates different approaches of semantic web search engines. Besides stating numerous challenges faced by the semantic web, it also illustrates the solutions.
    Theme
    Semantic Web
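Of the three pillars named in the abstract, RDF is the easiest to show concretely. A minimal sketch using the rdflib library (the URIs and example triples are invented):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
# RDF statements are subject-predicate-object triples.
g.add((EX.SemanticWeb, RDF.type, EX.Technology))
g.add((EX.SemanticWeb, RDFS.label, Literal("Semantic Web")))
g.add((EX.SemanticWeb, EX.extends, EX.WorldWideWeb))
print(g.serialize(format="turtle"))   # machine-readable, queryable with SPARQL
```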
  6. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.00
    0.0024000334 = product of:
      0.018000249 = sum of:
        0.0068304525 = product of:
          0.013660905 = sum of:
            0.013660905 = weight(_text_:online in 1197) [ClassicSimilarity], result of:
              0.013660905 = score(doc=1197,freq=4.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.142261 = fieldWeight in 1197, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1197)
          0.5 = coord(1/2)
        0.011169797 = weight(_text_:web in 1197) [ClassicSimilarity], result of:
          0.011169797 = score(doc=1197,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.108171105 = fieldWeight in 1197, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1197)
      0.13333334 = coord(2/15)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read-a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer-mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to hone in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves-which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing"-but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space-we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
  7. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.00
    0.0020503819 = product of:
      0.030755727 = sum of:
        0.030755727 = weight(_text_:evaluation in 3366) [ClassicSimilarity], result of:
          0.030755727 = score(doc=3366,freq=2.0), product of:
            0.13272417 = queryWeight, product of:
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.031640913 = queryNorm
            0.23172665 = fieldWeight in 3366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1947007 = idf(docFreq=1811, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3366)
      0.06666667 = coord(1/15)
    
    Abstract
    Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
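The "optimized density map" listed as aspect (2) can be approximated with ordinary kernel density estimation; FacetAtlas optimizes this step, but a plain KDE conveys the idea. The document coordinates below are random stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
points = rng.normal(size=(2, 200))          # projected document positions
kde = gaussian_kde(points)                  # estimate a 2-D density
xs, ys = np.mgrid[-3:3:60j, -3:3:60j]       # evaluation grid
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
# Dense corpus regions can now be shaded behind the node-link view.
print(float(density.max()))
```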
  8. Zhang, J.; Mostafa, J.; Tripathy, H.: Information retrieval by semantic analysis and visualization of the concept space of D-Lib® magazine (2002) 0.00
    0.002000028 = product of:
      0.015000209 = sum of:
        0.005692044 = product of:
          0.011384088 = sum of:
            0.011384088 = weight(_text_:online in 1211) [ClassicSimilarity], result of:
              0.011384088 = score(doc=1211,freq=4.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.11855084 = fieldWeight in 1211, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1211)
          0.5 = coord(1/2)
        0.009308165 = weight(_text_:web in 1211) [ClassicSimilarity], result of:
          0.009308165 = score(doc=1211,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.09014259 = fieldWeight in 1211, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1211)
      0.13333334 = coord(2/15)
    
    Abstract
    In this article we present a method for retrieving documents from a digital library through a visual interface based on automatically generated concepts. We used a vocabulary generation algorithm to generate a set of concepts for the digital library and a technique called the max-min distance technique to cluster them. Additionally, the concepts were visualized in a spring embedding graph layout to depict the semantic relationship among them. The resulting graph layout serves as an aid to users for retrieving documents. An online archive containing the contents of D-Lib Magazine from July 1995 to May 2002 was used to test the utility of an implemented retrieval and visualization system. We believe that the method developed and tested can be applied to many different domains to help users get a better understanding of online document collections and to minimize users' cognitive load during execution of search tasks. Over the past few years, the volume of information available through the World Wide Web has been expanding exponentially. Never has so much information been so readily available and shared among so many people. Unfortunately, the unstructured nature and huge volume of information accessible over networks have made it hard for users to sift through and find relevant information. To deal with this problem, information retrieval (IR) techniques have gained more intensive attention from both industrial and academic researchers. Numerous IR techniques have been developed to help deal with the information overload problem. These techniques concentrate on mathematical models and algorithms for retrieval. Popular IR models such as the Boolean model, the vector-space model, the probabilistic model and their variants are well established.
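The "max-min distance technique" mentioned in the abstract is usually understood as farthest-first seed selection. A minimal sketch under that assumption, with 2-D points and Euclidean distance standing in for concept vectors:

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def max_min_seeds(points, k):
    seeds = [points[0]]                      # arbitrary first seed
    while len(seeds) < k:
        # Pick the point whose minimum distance to the chosen seeds is maximal.
        best = max(points, key=lambda p: min(dist(p, s) for s in seeds))
        seeds.append(best)
    return seeds

random.seed(1)
concepts = [(random.random(), random.random()) for _ in range(50)]
print(max_min_seeds(concepts, 3))            # three well-spread cluster seeds
```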
  9. Braun, S.: Manifold: a custom analytics platform to visualize research impact (2015) 0.00
    0.0014893063 = product of:
      0.022339594 = sum of:
        0.022339594 = weight(_text_:web in 2906) [ClassicSimilarity], result of:
          0.022339594 = score(doc=2906,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.21634221 = fieldWeight in 2906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2906)
      0.06666667 = coord(1/15)
    
    Abstract
    The use of research impact metrics and analytics has become an integral component to many aspects of institutional assessment. Many platforms currently exist to provide such analytics, both proprietary and open source; however, the functionality of these systems may not always overlap to serve uniquely specific needs. In this paper, I describe a novel web-based platform, named Manifold, that I built to serve custom research impact assessment needs in the University of Minnesota Medical School. Built on a standard LAMP architecture, Manifold automatically pulls publication data for faculty from Scopus through APIs, calculates impact metrics through automated analytics, and dynamically generates report-like profiles that visualize those metrics. Work on this project has resulted in many lessons learned about challenges to sustainability and scalability in developing a system of such magnitude.
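One impact metric a platform like this typically computes once publication data has been pulled is the h-index, which needs only per-paper citation counts. The Scopus API calls Manifold uses are not reproduced here; the counts below are invented.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

print(h_index([42, 17, 9, 6, 3, 1]))   # -> 4
```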
  10. Visual thesaurus (2005) 0.00
    0.0014671701 = product of:
      0.02200755 = sum of:
        0.02200755 = weight(_text_:software in 1292) [ClassicSimilarity], result of:
          0.02200755 = score(doc=1292,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.17532499 = fieldWeight in 1292, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=1292)
      0.06666667 = coord(1/15)
    
    Content
    Traditional print reference guides often have two methods of finding information: an order (alphabetical for dictionaries and encyclopedias, by subject hierarchy in the case of thesauri) and indices (ordered lists, with a more complete listing of words and concepts, which refer back to original content from the main body of the book). A user of such traditional print reference guides who is looking for information will either browse through the ordered information in the main body of the reference book, or scan through the indices to find what is necessary. The advent of the computer allows for much more rapid electronic searches of the same information, and for multiple layers of indices. Users can either search through information by entering a keyword, or they can browse through the information through an outline index, which represents the information contained in the main body of the data. There are two traditional user interfaces for such applications. First, the user may type text into a search field and, in response, a list of results is returned to the user. The user then selects a returned entry and may page through the resulting information. Alternatively, the user may choose from a list of words from an index. For example, software thesaurus applications, in which a user attempts to find synonyms, antonyms, homonyms, etc. for a selected word, are usually implemented using the conventional search and presentation techniques discussed above. The presentation of results only allows for a one-dimensional order of data at any one time. In addition, only a limited number of results can be shown at once, and selecting a result inevitably leads to another page-if the result is not satisfactory, the user must search again. Finally, it is difficult to present information about the manner in which the search results are related, or to present quantitative information about the results without causing confusion. Therefore, there exists a need for a multidimensional graphical display of information, in particular with respect to information relating to the meaning of words and their relationships to other words. There further exists a need to present large amounts of information in a way that can be manipulated by the user, without the user losing his place. And there exists a need for more fluid, intuitive and powerful thesaurus functionality that invites the exploration of language.
  11. Seeliger, F.: A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.00
    0.0014671701 = product of:
      0.02200755 = sum of:
        0.02200755 = weight(_text_:software in 2547) [ClassicSimilarity], result of:
          0.02200755 = score(doc=2547,freq=2.0), product of:
            0.12552431 = queryWeight, product of:
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.031640913 = queryNorm
            0.17532499 = fieldWeight in 2547, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9671519 = idf(docFreq=2274, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
      0.06666667 = coord(1/15)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the level of individual articles, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. Many features, such as ranking, autocompletion, multi-faceted classification, and refinement options, reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (STEM) topics in order to develop an advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3000 terms in our discovery tool WILBERT.
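The SKOS thesaurus that iQvoc builds in the background boils down to concepts with labels and broader/narrower links. A minimal sketch with rdflib (the logistics terms are invented examples in the spirit of the micro thesauri described above):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.add((EX.logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))
g.add((EX.scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
g.add((EX.scm, SKOS.altLabel, Literal("SCM", lang="en")))   # established abbreviation
g.add((EX.scm, SKOS.broader, EX.logistics))                 # browsable hierarchy
print(g.serialize(format="turtle"))
```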
  12. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.00
    0.0012410887 = product of:
      0.01861633 = sum of:
        0.01861633 = weight(_text_:web in 4837) [ClassicSimilarity], result of:
          0.01861633 = score(doc=4837,freq=2.0), product of:
            0.10326045 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.031640913 = queryNorm
            0.18028519 = fieldWeight in 4837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4837)
      0.06666667 = coord(1/15)
    
    Abstract
    Icon systems could represent an efficient solution for the collective iconic categorization of knowledge by providing graphical interpretation. Their pictorial character helps visualize the structure of a text and makes it more understandable across vocabulary barriers. In this paper we propose a knowledge-engineering-based approach to iconic representation. We assume that such systematic icons improve collective knowledge management (KM). Meanwhile, text (structured under our knowledge management model, Hypertopic) helps to reduce the diversity of graphical interpretation among different users. This position paper also prepares to demonstrate our hypothesis through an "iconic social tagging" experiment to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project, and a part of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical work on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  13. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    6.439812E-4 = product of:
      0.009659718 = sum of:
        0.009659718 = product of:
          0.019319436 = sum of:
            0.019319436 = weight(_text_:online in 2131) [ClassicSimilarity], result of:
              0.019319436 = score(doc=2131,freq=2.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.20118743 = fieldWeight in 2131, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2131)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  14. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    4.553635E-4 = product of:
      0.0068304525 = sum of:
        0.0068304525 = product of:
          0.013660905 = sum of:
            0.013660905 = weight(_text_:online in 1202) [ClassicSimilarity], result of:
              0.013660905 = score(doc=1202,freq=4.0), product of:
                0.096027054 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.031640913 = queryNorm
                0.142261 = fieldWeight in 1202, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1202)
          0.5 = coord(1/2)
      0.06666667 = coord(1/15)
    
    Abstract
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
    Theme
    Klassifikationssysteme im Online-Retrieval
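The retrieval figures in result 14 can be re-derived mechanically. A toy sketch using the counts from the passage (50 "urban" titles, 35 "city" titles, 15 others, no overlap; the title strings are invented):

```python
titles = ["urban sociology"] * 50 + ["the city"] * 35 + ["town community"] * 15

def recall(keyword):
    hits = sum(keyword in t for t in titles)   # title-keyword match
    return hits / len(titles)

print(recall("urban"))   # 0.50, "half of the titles"
print(recall("city"))    # 0.35, "just over a third"
```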