Search (71 results, page 1 of 4)

  • theme_ss:"Visualisierung"
  • type_ss:"a"
  1. Moya-Anegón, F. de; Vargas-Quesada, B.; Chinchilla-Rodríguez, Z.; Corera-Álvarez, E.; Munoz-Fernández, F.J.; Herrero-Solana, V.; SCImago Group: Visualizing the marrow of science (2007) 0.04
    0.035167206 = product of:
      0.08791801 = sum of:
        0.03148376 = weight(_text_:technology in 1313) [ClassicSimilarity], result of:
          0.03148376 = score(doc=1313,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.23034787 = fieldWeight in 1313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1313)
        0.056434255 = weight(_text_:social in 1313) [ClassicSimilarity], result of:
          0.056434255 = score(doc=1313,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.30839854 = fieldWeight in 1313, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1313)
      0.4 = coord(2/5)
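    The nested breakdown above (and the similar breakdowns under the following results) is Lucene's ClassicSimilarity explain() output. As a reading aid only, the arithmetic can be reproduced in a few lines of Python; the sketch below simply re-derives the numbers shown above and is not the search engine's own code.

    import math

    def classic_similarity(freq, idf, query_norm, field_norm):
        tf = math.sqrt(freq)                    # 1.4142135 for freq = 2.0
        query_weight = idf * query_norm         # 2.978387 * 0.04589033 = 0.13667917
        field_weight = tf * idf * field_norm    # 1.4142135 * 2.978387 * 0.0546875 = 0.23034787
        return query_weight * field_weight      # 0.03148376

    technology = classic_similarity(2.0, 2.978387, 0.04589033, 0.0546875)
    social = classic_similarity(2.0, 3.9875789, 0.04589033, 0.0546875)
    total = (technology + social) * (2 / 5)     # coord(2/5); gives 0.035167206, as shown above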
    
    Abstract
    This study proposes a new methodology that allows for the generation of scientograms of major scientific domains, constructed on the basis of cocitation of Institute for Scientific Information (ISI) categories, pruned using Pathfinder networks, with a layout determined by algorithms of the spring-embedder type (Kamada-Kawai), and then corroborated structurally by factor analysis. We present the complete scientogram of the world for the year 2002. It integrates the natural sciences, the social sciences, and arts and humanities. Its basic structure and the essential relationships therein are revealed, allowing us to simultaneously analyze the macrostructure, microstructure, and marrow of worldwide scientific output.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2167-2179
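    The methodology in the record above chains a cocitation matrix, Pathfinder pruning, and a Kamada-Kawai (spring-embedder) layout. The toy sketch below illustrates only the layout step on an invented cocitation matrix, using networkx; Pathfinder pruning and the factor analysis are omitted, and the category names are made up.

    import networkx as nx

    cocitation = {("Physics", "Chemistry"): 120,
                  ("Physics", "Sociology"): 8,
                  ("Chemistry", "Sociology"): 5,
                  ("Sociology", "Art History"): 30}

    G = nx.Graph()
    for (a, b), w in cocitation.items():
        # convert cocitation strength into a distance so that strongly
        # cocited categories end up close together in the layout
        G.add_edge(a, b, distance=1.0 / w)

    pos = nx.kamada_kawai_layout(G, weight="distance")
    for node, (x, y) in pos.items():
        print(f"{node}: ({x:.2f}, {y:.2f})")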
  2. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.03
    0.030965447 = product of:
      0.07741362 = sum of:
        0.035981435 = weight(_text_:technology in 4486) [ClassicSimilarity], result of:
          0.035981435 = score(doc=4486,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.2632547 = fieldWeight in 4486, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0625 = fieldNorm(doc=4486)
        0.04143218 = product of:
          0.08286436 = sum of:
            0.08286436 = weight(_text_:aspects in 4486) [ClassicSimilarity], result of:
              0.08286436 = score(doc=4486,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.39950368 = fieldWeight in 4486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4486)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, S.1382-1402
  3. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.03
    0.02511943 = product of:
      0.062798575 = sum of:
        0.022488397 = weight(_text_:technology in 82) [ClassicSimilarity], result of:
          0.022488397 = score(doc=82,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 82, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=82)
        0.04031018 = weight(_text_:social in 82) [ClassicSimilarity], result of:
          0.04031018 = score(doc=82,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.22028469 = fieldWeight in 82, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=82)
      0.4 = coord(2/5)
    
    Abstract
    Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Science Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environments; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.1, S.25-38
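    The "local impact" measure defined in the abstract above reduces, for a single citation environment, to a simple proportion. A minimal sketch with invented citation counts (the journal names are only placeholders):

    citations_received = {"J. Informetrics": 420, "JASIST": 910, "Scientometrics": 670}

    total = sum(citations_received.values())                        # 2000
    local_impact = {j: c / total for j, c in citations_received.items()}
    # e.g. JASIST -> 910 / 2000 = 0.455; in the Pajek maps described above,
    # vertical node sizes would be scaled proportionally to such shares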
  4. Lin, F.-T.: Drawing a knowledge map of smart city knowledge in academia (2019) 0.03
    0.02511943 = product of:
      0.062798575 = sum of:
        0.022488397 = weight(_text_:technology in 5454) [ClassicSimilarity], result of:
          0.022488397 = score(doc=5454,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 5454, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5454)
        0.04031018 = weight(_text_:social in 5454) [ClassicSimilarity], result of:
          0.04031018 = score(doc=5454,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.22028469 = fieldWeight in 5454, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5454)
      0.4 = coord(2/5)
    
    Abstract
    This research takes the academic articles in the Web of Science's core collection database as a corpus to draw a series of knowledge maps, to explore the relationships, connectivity, distribution, and evolution among their keywords with respect to smart cities in the last decade. Beyond just drawing a text cloud or measuring their sizes, we further explore their texture by identifying the hottest keywords in academic articles, construct links between and among articles that share common keywords, identify islands, rocks, and reefs that are formed by connected articles - a metaphor inspired by Ong et al. (2005) - and analyze trends in their evolution. We found the following phenomena: 1) "Internet of Things" is the most frequently mentioned keyword in recent research articles; 2) the numbers of islands and reefs are increasing; 3) the evolution of the numbers of weighted links has a fractal-like structure; and 4) the coverage of the largest rock, formed by articles that share a common keyword, in the largest island is converging to around 10% to 20%. These phenomena imply that a common interest in the technology of smart cities has been emerging among researchers. However, the administrative, social, economic, and cultural issues need more attention in academia in the future.
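    The "islands" in the abstract above are, in graph terms, connected components of a network in which articles sharing a keyword are linked. A minimal sketch of that construction, with invented article IDs and keywords:

    from itertools import combinations
    import networkx as nx

    articles = {"A1": {"internet of things", "smart city"},
                "A2": {"smart city", "big data"},
                "A3": {"e-government"},
                "A4": {"big data"}}

    G = nx.Graph()
    G.add_nodes_from(articles)
    for a, b in combinations(articles, 2):
        if articles[a] & articles[b]:          # the two articles share at least one keyword
            G.add_edge(a, b)

    islands = list(nx.connected_components(G))
    # two islands here: {'A1', 'A2', 'A4'} and the isolate {'A3'}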
  5. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.02
    0.019353405 = product of:
      0.04838351 = sum of:
        0.022488397 = weight(_text_:technology in 3366) [ClassicSimilarity], result of:
          0.022488397 = score(doc=3366,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 3366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3366)
        0.025895113 = product of:
          0.051790226 = sum of:
            0.051790226 = weight(_text_:aspects in 3366) [ClassicSimilarity], result of:
              0.051790226 = score(doc=3366,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.2496898 = fieldWeight in 3366, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3366)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
  6. Minkov, E.; Kahanov, K.; Kuflik, T.: Graph-based recommendation integrating rating history and domain knowledge : application to on-site guidance of museum visitors (2017) 0.02
    0.019353405 = product of:
      0.04838351 = sum of:
        0.022488397 = weight(_text_:technology in 3756) [ClassicSimilarity], result of:
          0.022488397 = score(doc=3756,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 3756, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3756)
        0.025895113 = product of:
          0.051790226 = sum of:
            0.051790226 = weight(_text_:aspects in 3756) [ClassicSimilarity], result of:
              0.051790226 = score(doc=3756,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.2496898 = fieldWeight in 3756, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3756)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Visitors to museums and other cultural heritage sites encounter a wealth of exhibits in a variety of subject areas, but can explore only a small number of them. Moreover, there typically exists rich complementary information that can be delivered to the visitor about exhibits of interest, but only a fraction of this information can be consumed during the limited time of the visit. Recommender systems may help visitors to cope with this information overload. Ideally, the recommender system of choice should model user preferences, as well as background knowledge about the museum's environment, considering aspects of physical and thematic relevancy. We propose a personalized graph-based recommender framework, representing rating history and background multi-facet information jointly as a relational graph. A random walk measure is applied to rank available complementary multimedia presentations by their relevancy to a visitor's profile, integrating the various dimensions. We report the results of experiments conducted using authentic data collected at the Hecht museum. An evaluation of multiple graph variants, compared with several popular and state-of-the-art recommendation methods, indicates advantages of the graph-based approach.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1911-1924
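    The random-walk ranking described in the record above can be approximated with personalized PageRank over a small relational graph. The sketch below is an assumption-laden illustration, not the authors' exact measure: the visitor/exhibit/presentation graph is invented, and networkx's pagerank stands in for their random walk.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("visitor", "exhibit_A"), ("visitor", "exhibit_B"),
                      ("exhibit_A", "presentation_1"),
                      ("exhibit_A", "presentation_2"),
                      ("exhibit_B", "presentation_2"),
                      ("exhibit_B", "presentation_3")])

    # restart the walk at the visitor node to personalize the ranking
    scores = nx.pagerank(G, personalization={"visitor": 1.0})
    ranked = sorted((n for n in G if n.startswith("presentation")),
                    key=scores.get, reverse=True)
    print(ranked)   # presentation_2, linked to both visited exhibits, ranks first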
  7. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.02
    0.018255439 = product of:
      0.045638595 = sum of:
        0.026986076 = weight(_text_:technology in 3693) [ClassicSimilarity], result of:
          0.026986076 = score(doc=3693,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.19744103 = fieldWeight in 3693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.01865252 = product of:
          0.03730504 = sum of:
            0.03730504 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.03730504 = score(doc=3693,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved the original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  8. Zhu, B.; Chen, H.: Information visualization (2004) 0.02
    0.018156925 = product of:
      0.045392312 = sum of:
        0.027265735 = weight(_text_:technology in 4276) [ClassicSimilarity], result of:
          0.027265735 = score(doc=4276,freq=6.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.19948712 = fieldWeight in 4276, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.02734375 = fieldNorm(doc=4276)
        0.018126579 = product of:
          0.036253158 = sum of:
            0.036253158 = weight(_text_:aspects in 4276) [ClassicSimilarity], result of:
              0.036253158 = score(doc=4276,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.17478286 = fieldWeight in 4276, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4276)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Advanced technology has resulted in the generation of about one million terabytes of information every year. Ninety-nine percent of this is available in digital format (Keim, 2001). More information will be generated in the next three years than was created during all of previous human history (Keim, 2001). Collecting information is no longer a problem, but extracting value from information collections has become progressively more difficult. Various search engines have been developed to make it easier to locate information of interest, but these work well only for a person who has a specific goal and who understands what and how information is stored. This usually is not the case. Visualization was commonly thought of in terms of representing human mental processes (MacEachren, 1991; Miller, 1984). The concept is now associated with the amplification of these mental processes (Card, Mackinlay, & Shneiderman, 1999). Human eyes can process visual cues rapidly, whereas advanced information analysis techniques transform the computer into a powerful means of managing digitized information. Visualization offers a link between these two potent systems, the human eye and the computer (Gershon, Eick, & Card, 1998), helping to identify patterns and to extract insights from large amounts of information. The identification of patterns is important because it may lead to a scientific discovery, an interpretation of clues to solve a crime, the prediction of catastrophic weather, a successful financial investment, or a better understanding of human behavior in a computer-mediated environment. Visualization technology shows considerable promise for increasing the value of large-scale collections of information, as evidenced by several commercial applications of TreeMap (e.g., http://www.smartmoney.com) and Hyperbolic tree (e.g., http://www.inxight.com) to visualize large-scale hierarchical structures. Although the proliferation of visualization technologies dates from the 1990s, when sophisticated hardware and software made increasingly faster generation of graphical objects possible, the role of visual aids in facilitating the construction of mental images has a long history. Visualization has been used to communicate ideas, to monitor trends implicit in data, and to explore large volumes of data for hypothesis generation. Imagine traveling to a strange place without a map, having to memorize physical and chemical properties of an element without Mendeleyev's periodic table, trying to understand the stock market without statistical diagrams, or browsing a collection of documents without interactive visual aids. A collection of information can lose its value simply because of the effort required for exhaustive exploration. Such frustrations can be overcome by visualization.
    Visualization can be classified as scientific visualization, software visualization, or information visualization. Although the data differ, the underlying techniques have much in common. They use the same elements (visual cues) and follow the same rules of combining visual cues to deliver patterns. They all involve understanding human perception (Encarnacao, Foley, Bryson, & Feiner, 1994) and require domain knowledge (Tufte, 1990). Because most decisions are based on unstructured information, such as text documents, Web pages, or e-mail messages, this chapter focuses on the visualization of unstructured textual documents. The chapter reviews information visualization techniques developed over the last decade and examines how they have been applied in different domains. The first section provides the background by describing visualization history and giving overviews of scientific, software, and information visualization as well as the perceptual aspects of visualization. The next section assesses important visualization techniques that convert abstract information into visual objects and facilitate navigation through displays on a computer screen. It also explores information analysis algorithms that can be applied to identify or extract salient visualizable structures from collections of information. Information visualization systems that integrate different types of technologies to address problems in different domains are then surveyed, and we move on to a survey and critique of visualization system evaluation studies. The chapter concludes with a summary and identification of future research directions.
    Source
    Annual review of information science and technology. 39(2005), S.139-177
  9. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.02
    0.017873263 = product of:
      0.04468316 = sum of:
        0.032248147 = weight(_text_:social in 2659) [ClassicSimilarity], result of:
          0.032248147 = score(doc=2659,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.17622775 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.012435013 = product of:
          0.024870027 = sum of:
            0.024870027 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.024870027 = score(doc=2659,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  10. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.02
    0.015212866 = product of:
      0.038032163 = sum of:
        0.022488397 = weight(_text_:technology in 5272) [ClassicSimilarity], result of:
          0.022488397 = score(doc=5272,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 5272, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5272)
        0.015543767 = product of:
          0.031087535 = sum of:
            0.031087535 = weight(_text_:22 in 5272) [ClassicSimilarity], result of:
              0.031087535 = score(doc=5272,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.19345059 = fieldWeight in 5272, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5272)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    22. 7.2006 16:11:05
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.359-377
  11. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.02
    0.015212866 = product of:
      0.038032163 = sum of:
        0.022488397 = weight(_text_:technology in 4573) [ClassicSimilarity], result of:
          0.022488397 = score(doc=4573,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.16453418 = fieldWeight in 4573, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4573)
        0.015543767 = product of:
          0.031087535 = sum of:
            0.031087535 = weight(_text_:22 in 4573) [ClassicSimilarity], result of:
              0.031087535 = score(doc=4573,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.19345059 = fieldWeight in 4573, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4573)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Date
    9.12.2018 16:22:25
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.12, S.1428-1445
  12. Zhang, J.; Zhao, Y.: ¬A user term visualization analysis based on a social question and answer log (2013) 0.01
    0.013963858 = product of:
      0.06981929 = sum of:
        0.06981929 = weight(_text_:social in 2715) [ClassicSimilarity], result of:
          0.06981929 = score(doc=2715,freq=6.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.3815443 = fieldWeight in 2715, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2715)
      0.2 = coord(1/5)
    
    Abstract
    The authors of this paper investigate consumers' diabetes-related terms based on a log from the Yahoo! Answers social question and answer (Q&A) forum, ascertain characteristics and relationships among terms related to diabetes from the consumers' perspective, and reveal users' diabetes information-seeking patterns. In this study, the log analysis method, the data coding method, and the multidimensional scaling (MDS) visualization analysis method were used. The visual analyses were conducted at two levels: term analysis within a category and category analysis among the categories in the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. There were 12 categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) in the diabetes-related schema which emerged from the data coding analysis. The analyses at the two levels show that terms and categories were clustered and patterns were revealed. Future research directions are also included.
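    The multidimensional scaling step mentioned in the abstract above projects a term dissimilarity matrix into two dimensions for plotting. A minimal sketch with an invented four-term matrix (not the authors' data or coding scheme):

    import numpy as np
    from sklearn.manifold import MDS

    terms = ["insulin", "diet", "symptom", "exercise"]
    dissimilarity = np.array([[0.0, 0.6, 0.4, 0.7],
                              [0.6, 0.0, 0.8, 0.3],
                              [0.4, 0.8, 0.0, 0.9],
                              [0.7, 0.3, 0.9, 0.0]])

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissimilarity)     # 2-D coordinates for plotting
    for term, (x, y) in zip(terms, coords):
        print(f"{term}: ({x:.2f}, {y:.2f})")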
  13. Tscherteu, G.; Langreiter, C.: Explorative Netzwerkanalyse im Living Web (2009) 0.01
    0.012899259 = product of:
      0.06449629 = sum of:
        0.06449629 = weight(_text_:social in 4870) [ClassicSimilarity], result of:
          0.06449629 = score(doc=4870,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.3524555 = fieldWeight in 4870, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0625 = fieldNorm(doc=4870)
      0.2 = coord(1/5)
    
    Source
    Social Semantic Web: Web 2.0, was nun? Hrsg.: A. Blumauer u. T. Pellegrini
  14. Wu, Y.; Bai, R.: ¬An event relationship model for knowledge organization and visualization (2017) 0.01
    0.009674444 = product of:
      0.04837222 = sum of:
        0.04837222 = weight(_text_:social in 3867) [ClassicSimilarity], result of:
          0.04837222 = score(doc=3867,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.26434162 = fieldWeight in 3867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.046875 = fieldNorm(doc=3867)
      0.2 = coord(1/5)
    
    Abstract
    An event is a specific occurrence involving participants, which is a typed, n-ary association of entities or other events, each identified as a participant in a specific semantic role in the event (Pyysalo et al. 2012; Linguistic Data Consortium 2005). Event types may vary across domains. Representing relationships between events can facilitate the understanding of knowledge in complex systems (such as economic systems, human body, social systems). In the simplest form, an event can be represented as Entity A <Relation> Entity B. This paper evaluates several knowledge organization and visualization models and tools, such as concept maps (Cmap), topic maps (Ontopia), network analysis models (Gephi), and ontology (Protégé), then proposes an event relationship model that aims to integrate the strengths of these models, and can represent complex knowledge expressed in events and their relationships.
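    The event representation described in the abstract above (a typed, n-ary association of participants in semantic roles, with typed relations between events) maps naturally onto a small data structure. The field names and sample events below are illustrative assumptions, not the authors' schema.

    from dataclasses import dataclass, field

    @dataclass
    class Event:
        event_type: str
        participants: dict = field(default_factory=dict)   # semantic role -> entity or Event

    @dataclass
    class EventRelation:
        source: Event
        relation: str
        target: Event

    rate_cut = Event("PolicyChange", {"agent": "central bank", "theme": "interest rate"})
    spending = Event("EconomicActivity", {"agent": "households", "theme": "consumption"})
    link = EventRelation(rate_cut, "causes", spending)   # Entity A <Relation> Entity B, lifted to events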
  15. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    0.009127719 = product of:
      0.022819297 = sum of:
        0.013493038 = weight(_text_:technology in 3035) [ClassicSimilarity], result of:
          0.013493038 = score(doc=3035,freq=2.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.09872051 = fieldWeight in 3035, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.00932626 = product of:
          0.01865252 = sum of:
            0.01865252 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.01865252 = score(doc=3035,freq=2.0), product of:
                0.16070013 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04589033 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    As the team describe in a paper posted on arXiv (http://arxiv.org/abs/1605.04951), they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Footnote
    Vgl.: http://www.economist.com/news/science-and-technology/21700620-surprisingly-simple-test-check-research-papers-errors-come-again.
  16. Yan, B.; Luo, J.: Filtering patent maps for visualization of diversification paths of inventors and organizations (2017) 0.01
    0.008995359 = product of:
      0.044976793 = sum of:
        0.044976793 = weight(_text_:technology in 3651) [ClassicSimilarity], result of:
          0.044976793 = score(doc=3651,freq=8.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.32906836 = fieldWeight in 3651, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3651)
      0.2 = coord(1/5)
    
    Abstract
    In the information science literature, recent studies have used patent databases and patent classification information to construct network maps of patent technology classes. In such a patent technology map, almost all pairs of technology classes are connected, whereas most of the connections between them are extremely weak. This observation suggests the possibility of filtering the patent network map by removing weak links. However, removing links may reduce the explanatory power of the network on inventor or organization diversification. The network links may explain the patent portfolio diversification paths of inventors and inventing organizations. We measure the diversification explanatory power of the patent network map, and present a method to objectively choose an optimal tradeoff between explanatory power and removing weak links. We show that this method can remove a degree of arbitrariness compared with previous filtering methods based on arbitrary thresholds, and also identify previous filtering methods that created filters outside the optimal tradeoff. The filtered map aims to aid in network visualization analyses of the technological diversification of inventors, organizations, and other innovation agents, and potential foresight analysis. Such applications to a prolific inventor (Leonard Forbes) and company (Google) are demonstrated.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.6, S.1551-1563
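    The tradeoff described in the record above (removing weak links versus keeping the map's power to explain observed diversification steps) can be illustrated with a toy weighted class network. The class labels, weights, observed steps, and thresholds below are invented; this is a sketch of the idea, not the authors' optimization method.

    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([("H01L", "G06F", 0.62), ("G06F", "G06Q", 0.18),
                               ("H01L", "H04N", 0.05), ("G06Q", "H04N", 0.31)])

    # pairs of classes an inventor actually moved between (made up here)
    observed_steps = [("H01L", "G06F"), ("G06F", "G06Q"), ("H01L", "H04N")]

    def explanatory_power(graph, steps, threshold):
        kept = [(u, v) for u, v, w in graph.edges(data="weight") if w >= threshold]
        filtered = nx.Graph(kept)
        return sum(filtered.has_edge(u, v) for u, v in steps) / len(steps)

    for t in (0.0, 0.1, 0.3):
        print(t, explanatory_power(G, observed_steps, t))
    # raising the threshold simplifies the map but lowers explanatory power (1.0, 2/3, 1/3 here)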
  17. Ross, A.: Screen-based information design : Eforms (2000) 0.01
    0.008904952 = product of:
      0.044524755 = sum of:
        0.044524755 = weight(_text_:technology in 712) [ClassicSimilarity], result of:
          0.044524755 = score(doc=712,freq=4.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.32576108 = fieldWeight in 712, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.0546875 = fieldNorm(doc=712)
      0.2 = coord(1/5)
    
    Abstract
    The study surveys and delineates the processes involved in screen-based information design, specifically in relation to the creation of electronic forms, and from this offers a guide to their production. The study also examines the design and technological issues associated with the transfer, or translation, of the printed form to the computer screen: how an Eform might be made more visually engaging without detracting from the information relevant to the form's navigation and completion, and the interaction between technology and (document) structure, where technology can eliminate or reduce traditional structural problems through the application of non-linear strategies. It also reviews potential solutions for incorporating improved functionality through interactivity.
  18. Denton, W.: On dentographs, a new method of visualizing library collections (2012) 0.01
    0.008286436 = product of:
      0.04143218 = sum of:
        0.04143218 = product of:
          0.08286436 = sum of:
            0.08286436 = weight(_text_:aspects in 580) [ClassicSimilarity], result of:
              0.08286436 = score(doc=580,freq=2.0), product of:
                0.20741826 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.04589033 = queryNorm
                0.39950368 = fieldWeight in 580, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0625 = fieldNorm(doc=580)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    A dentograph is a visualization of a library's collection built on the idea that a classification scheme is a mathematical function mapping one set of things (books or the universe of knowledge) onto another (a set of numbers and letters). Dentographs can visualize aspects of just one collection or can be used to compare two or more collections. This article describes how to build them, with examples and code using Ruby and R, and discusses some problems and future directions.
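    The article's own examples use Ruby and R; purely to illustrate the underlying idea in the abstract above (a classification scheme as a function from call numbers to cells of a grid whose counts are then plotted), here is a loose Python sketch. The call numbers and the binning rule are invented.

    from collections import Counter

    call_numbers = ["QA76.9", "QA76.7", "QA11.2", "Z699.5", "Z699.3", "QA276.4"]

    def cell(call_number):
        main = call_number.split(".")[0]            # drop the decimal extension
        letters = "".join(ch for ch in main if ch.isalpha())
        number = int("".join(ch for ch in main if ch.isdigit()) or 0)
        return letters, (number // 100) * 100       # class letters, hundreds band

    counts = Counter(cell(cn) for cn in call_numbers)
    # Counter({('QA', 0): 3, ('Z', 600): 2, ('QA', 200): 1}); these counts
    # would drive the shading of a dentograph-style grid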
  19. Xiaoyue M.; Cahier, J.-P.: Iconic categorization with knowledge-based "icon systems" can improve collaborative KM (2011) 0.01
    0.008062037 = product of:
      0.04031018 = sum of:
        0.04031018 = weight(_text_:social in 4837) [ClassicSimilarity], result of:
          0.04031018 = score(doc=4837,freq=2.0), product of:
            0.18299131 = queryWeight, product of:
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.04589033 = queryNorm
            0.22028469 = fieldWeight in 4837, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9875789 = idf(docFreq=2228, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4837)
      0.2 = coord(1/5)
    
    Abstract
    Icon systems could represent an efficient solution for the collective iconic categorization of knowledge by providing graphical interpretation. Their pictorial character helps make the structure of a text visible and understandable beyond vocabulary obstacles. In this paper we propose a Knowledge Engineering (KE) based iconic representation approach. We assume that such systematic icons improve collective knowledge management. Meanwhile, text (constructed under our knowledge management model, Hypertopic) helps to reduce the diversity of graphical understanding among different users. This "position paper" also prepares to demonstrate our hypothesis through an "iconic social tagging" experiment to be carried out in 2011 with UTT students. We describe the "socio-semantic web" information portal involved in this project, and a part of the icons already designed for this experiment in the sustainability field. We have reviewed existing theoretical works on icons from various origins, which can be used to lay the foundation of robust "icon systems".
  20. Rafols, I.; Porter, A.L.; Leydesdorff, L.: Science overlay maps : a new tool for research policy and library management (2010) 0.01
    0.0076328157 = product of:
      0.03816408 = sum of:
        0.03816408 = weight(_text_:technology in 3987) [ClassicSimilarity], result of:
          0.03816408 = score(doc=3987,freq=4.0), product of:
            0.13667917 = queryWeight, product of:
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.04589033 = queryNorm
            0.2792238 = fieldWeight in 3987, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.978387 = idf(docFreq=6114, maxDocs=44218)
              0.046875 = fieldNorm(doc=3987)
      0.2 = coord(1/5)
    
    Abstract
    We present a novel approach to visually locate bodies of research within the sciences, both at each moment of time and dynamically. This article describes how this approach fits with other efforts to locally and globally map scientific outputs. We then show how these science overlay maps help benchmarking, explore collaborations, and track temporal changes, using examples of universities, corporations, funding agencies, and research topics. We address their conditions of application and discuss advantages, downsides, and limitations. Overlay maps especially help investigate the increasing number of scientific developments and organizations that do not fit within traditional disciplinary categories. We make these tools available online to enable researchers to explore the ongoing sociocognitive transformations of science and technology systems.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.9, S.1871-1887

Languages

  • e 69
  • a 1
  • d 1