Search (175 results, page 1 of 9)

  • theme_ss:"Visualisierung"
  1. Spero, S.: LCSH is to thesaurus as doorbell is to mammal : visualizing structural problems in the Library of Congress Subject Headings (2008) 0.04
    0.041216135 = product of:
      0.08243227 = sum of:
        0.010546046 = weight(_text_:in in 2659) [ClassicSimilarity], result of:
          0.010546046 = score(doc=2659,freq=18.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.18034597 = fieldWeight in 2659, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.060237244 = weight(_text_:great in 2659) [ClassicSimilarity], result of:
          0.060237244 = score(doc=2659,freq=2.0), product of:
            0.24206476 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042989567 = queryNorm
            0.24884763 = fieldWeight in 2659, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=2659)
        0.011648986 = product of:
          0.023297971 = sum of:
            0.023297971 = weight(_text_:22 in 2659) [ClassicSimilarity], result of:
              0.023297971 = score(doc=2659,freq=2.0), product of:
                0.15054214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042989567 = queryNorm
                0.15476047 = fieldWeight in 2659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2659)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
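The score breakdown above is Lucene's ClassicSimilarity explain output, and its arithmetic can be reproduced directly: each term contributes queryWeight × fieldWeight, and the document score is the sum of the matching terms scaled by the coordination factor. A minimal sketch in plain Python, with all numbers copied from the explain tree for result 1 (doc 2659):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term of Lucene's ClassicSimilarity explain tree:
    score = queryWeight * fieldWeight, with
    queryWeight = idf * queryNorm and
    fieldWeight = sqrt(freq) * idf * fieldNorm."""
    return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

QN = 0.042989567  # queryNorm, shared by all terms of the query

# The three matching terms of doc 2659, values taken from the tree:
s_in    = term_score(18.0, 1.3602545, QN, 0.03125)        # ~ 0.010546046
s_great = term_score( 2.0, 5.6307793, QN, 0.03125)        # ~ 0.060237244
s_22    = term_score( 2.0, 3.5018296, QN, 0.03125) * 0.5  # inner coord(1/2)

total = (s_in + s_great + s_22) * 0.5  # outer coord(3/6)
print(total)  # ~ 0.041216135, the ranking score shown for result 1
```

The same recipe reproduces the scores of every other entry in this listing, since they all use the same queryNorm and idf values.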
    
    Abstract
    The Library of Congress Subject Headings (LCSH) has been developed over the course of more than a century, predating the semantic web by some time. Until 1986, the only concept-to-concept relationship available was an undifferentiated "See Also" reference, which was used for both associative (RT) and hierarchical (BT/NT) connections. In that year, in preparation for the first release of the headings in machine-readable MARC Authorities form, an attempt was made to automatically convert these "See Also" links into the standardized thesaural relations. Unfortunately, the rule used to determine the type of reference to generate relied on the presence of symmetric links to detect associatively related terms; "See Also" references that were only present in one of the related terms were assumed to be hierarchical. This left the process vulnerable to inconsistent use of references in the pre-conversion data, with a marked bias towards promoting relationships to hierarchical status. The Library of Congress was aware that the results of the conversion contained many inconsistencies, and intended to validate and correct the results over the course of time. Unfortunately, twenty years later, less than 40% of the converted records have been evaluated. The converted records, being the earliest encountered during the Library's cataloging activities, represent the most basic concepts within LCSH; errors in the syndetic structure for these records affect far more subordinate concepts than those nearer the periphery. Worse, a policy of patterning new headings after pre-existing ones leads to structural errors arising from the conversion process being replicated in these newer headings, perpetuating and exacerbating the errors. As the LCSH prepares for its second great conversion, from MARC to SKOS, it is critical to address these structural problems.
As part of the work on converting the headings into SKOS, I have experimented with different visualizations of the tangled web of broader terms embedded in LCSH. This poster illustrates several of these renderings, shows how they can help users to judge which relationships might not be correct, and shows just exactly how Doorbells and Mammals are related.
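The 1986 conversion rule the abstract describes is simple enough to sketch: a symmetric pair of "See Also" references becomes an associative (RT) link, while a one-way reference is assumed to be hierarchical (BT/NT). A minimal illustration in Python; the heading names are hypothetical, chosen only to echo the poster's title, and are not taken from LCSH:

```python
def classify_see_also(refs):
    """refs: set of (from_term, to_term) "See Also" references.
    Returns a dict mapping each reference to 'RT' (the reciprocal
    reference exists) or 'BT/NT' (one-way, assumed hierarchical)."""
    return {(a, b): "RT" if (b, a) in refs else "BT/NT"
            for a, b in refs}

# Hypothetical pre-conversion fragment: one missing reciprocal
# link is enough to promote an associative link to hierarchical.
refs = {("Doorbells", "Bells"), ("Bells", "Doorbells"),  # symmetric -> RT
        ("Doorbells", "Signaling devices")}              # one-way   -> BT/NT
print(classify_see_also(refs))
```

The fragility the abstract points out is visible here: whether a link ends up associative or hierarchical depends entirely on how consistently catalogers had entered reciprocal references before the conversion.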
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  2. Julien, C.-A.; Leide, J.E.; Bouthillier, F.: Controlled user evaluations of information visualization interfaces for text retrieval : literature review and meta-analysis (2008) 0.03
    0.031876296 = product of:
      0.09562889 = sum of:
        0.005273023 = weight(_text_:in in 1718) [ClassicSimilarity], result of:
          0.005273023 = score(doc=1718,freq=2.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.09017298 = fieldWeight in 1718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1718)
        0.090355866 = weight(_text_:great in 1718) [ClassicSimilarity], result of:
          0.090355866 = score(doc=1718,freq=2.0), product of:
            0.24206476 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042989567 = queryNorm
            0.37327147 = fieldWeight in 1718, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=1718)
      0.33333334 = coord(2/6)
    
    Abstract
    This review describes experimental designs (users, search tasks, measures, etc.) used by 31 controlled user studies of information visualization (IV) tools for textual information retrieval (IR) and a meta-analysis of the reported statistical effects. Comparable experimental designs allow research designers to compare their results with other reports, and support the development of experimentally verified design guidelines concerning which IV techniques are better suited to which types of IR tasks. The studies generally use a within-subject design with 15 or more undergraduate students performing browsing to known-item tasks on sets of at least 1,000 full-text articles or Web pages on topics of general interest/news. Results of the meta-analysis (N = 8) showed no significant effects of the IV tool as compared with a text-only equivalent, but the set showed great variability, suggesting an inadequate basis of comparison. Experimental design recommendations are provided which would support comparison of existing IV tools for IR usability testing.
    Series
    In-depth review
  3. Osiñska, V.: Visual analysis of classification scheme (2010) 0.03
    0.028374083 = product of:
      0.08512225 = sum of:
        0.0098257 = weight(_text_:in in 4068) [ClassicSimilarity], result of:
          0.0098257 = score(doc=4068,freq=10.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.16802745 = fieldWeight in 4068, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4068)
        0.07529655 = weight(_text_:great in 4068) [ClassicSimilarity], result of:
          0.07529655 = score(doc=4068,freq=2.0), product of:
            0.24206476 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042989567 = queryNorm
            0.31105953 = fieldWeight in 4068, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4068)
      0.33333334 = coord(2/6)
    
    Abstract
    This paper proposes a novel methodology to visualize a classification scheme. It is demonstrated with the Association for Computing Machinery (ACM) Computing Classification System (CCS). The collection was derived from the ACM digital library and contains 37,543 documents classified by CCS. The assigned classes, subject descriptors, and keywords were processed into a dataset to produce a graphical representation of the documents. The general conception is based on the similarity of co-classes (themes), proportional to the number of common publications. The final number of all possible classes and subclasses in the collection was 353, and therefore the similarity matrix of co-classes had the same dimension. A spherical surface was chosen as the target information space. Node locations for classes and documents on the sphere were obtained by means of Multidimensional Scaling coordinates. By representing the surface on a plane, like a map projection, it is possible to analyze the visualization layout. The graphical patterns were organized into colour clusters. To evaluate the resulting visualization maps, graphics filtering was applied. This proposed method can be very useful in interdisciplinary research fields. It allows a great amount of heterogeneous information to be conveyed in a compact display, including topics, relationships among topics, frequency of occurrence, importance, and changes of these properties over time.
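The first step of the pipeline described above, a co-class similarity matrix in which each entry counts the publications two classes share, can be sketched in a few lines of Python. The documents and class labels below are invented for illustration; they are not the 37,543-document ACM collection:

```python
from collections import Counter
from itertools import combinations

def co_class_matrix(doc_classes):
    """doc_classes: list of sets of class labels, one set per document.
    Returns a Counter mapping each unordered class pair to the number
    of documents assigned to both classes (the co-class similarity)."""
    pairs = Counter()
    for classes in doc_classes:
        for a, b in combinations(sorted(classes), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical classified documents:
docs = [{"H.3", "I.7"}, {"H.3", "I.7", "K.4"}, {"H.3", "K.4"}]
print(co_class_matrix(docs))  # e.g. ('H.3', 'I.7') shared by 2 documents
```

In the paper's method this matrix (353 × 353 for the full collection) is then fed to Multidimensional Scaling to place the class nodes on the spherical surface.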
  4. Rolling, L.: ¬The role of graphic display of concept relationships in indexing and retrieval vocabularies (1985) 0.02
    0.024910469 = product of:
      0.0747314 = sum of:
        0.0144941555 = weight(_text_:in in 3646) [ClassicSimilarity], result of:
          0.0144941555 = score(doc=3646,freq=34.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.24786183 = fieldWeight in 3646, product of:
              5.8309517 = tf(freq=34.0), with freq of:
                34.0 = termFreq=34.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
        0.060237244 = weight(_text_:great in 3646) [ClassicSimilarity], result of:
          0.060237244 = score(doc=3646,freq=2.0), product of:
            0.24206476 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042989567 = queryNorm
            0.24884763 = fieldWeight in 3646, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03125 = fieldNorm(doc=3646)
      0.33333334 = coord(2/6)
    
    Abstract
    The use of diagrams to express relationships in classification is not new. Many classificationists have used this approach, but usually in a minor display to make a point or for part of a difficult relational situation. Ranganathan, for example, used diagrams for some of his more elusive concepts. The thesaurus in particular and subject headings in general, with direct and indirect cross-references or equivalents, need many more diagrams than normally are included to make relationships and even semantics clear. A picture very often is worth a thousand words. Rolling has used directed graphs (arrowgraphs) to join terms as a practical method for rendering relationships between indexing terms lucid. He has succeeded very well in this endeavor. Four diagrams in this selection are all that one needs to explain how to employ the system, from initial listing to completed arrowgraph. The samples of his work include illustration of off-page connectors between arrowgraphs. The great advantage of using diagrams like this is that they present relations between individual terms in a format that is easy to comprehend. But of even greater value is the fact that one can use his arrowgraphs as schematics for making three-dimensional wire-and-ball models, in which the relationships may be seen even more clearly. In fact, errors or gaps in relations are much easier to find with this methodology. One also can get across the notion of the three-dimensionality of classification systems with such models. Pettee's "hand reaching up and over" (q.v.) is not a figment of the imagination. While the actual hand is a wire or stick, the concept visualized is helpful in illuminating the three-dimensional figure that is latent in all systems that have cross-references or "broader," "narrower," or, especially, "related" terms. Classification schedules, being hemmed in by the dimensions of the printed page, also benefit from such physical illustrations.
Rolling, an engineer by conviction, was the developer of information systems for the Cobalt Institute, the European Atomic Energy Community, and European Coal and Steel Community. He also developed and promoted computer-aided translation at the Commission of the European Communities in Luxembourg. One of his objectives has always been to increase the efficiency of mono- and multilingual thesauri for use in multinational information systems.
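Rolling's arrowgraphs are, in modern terms, directed graphs over indexing terms, and the listing-to-diagram step described above can be sketched by emitting Graphviz DOT from a relation list. The terms and relations below are invented for illustration, not taken from Rolling's thesauri:

```python
def arrowgraph_dot(relations):
    """relations: list of (broader, narrower) term pairs.
    Returns a Graphviz DOT digraph with one arrow per relation,
    suitable for rendering with the standard `dot` tool."""
    lines = ["digraph arrowgraph {"]
    for broader, narrower in relations:
        lines.append(f'  "{broader}" -> "{narrower}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical thesaurus fragment:
rels = [("Signaling devices", "Bells"), ("Bells", "Doorbells")]
print(arrowgraph_dot(rels))
```

Rendering such output makes the review's point concrete: gaps or cycles in the broader/narrower structure are far easier to spot in the drawn graph than in a flat alphabetical listing.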
    Footnote
    Original in: Classification research: Proceedings of the Second International Study Conference held at Hotel Prins Hamlet, Elsinore, Denmark, 14th-18th Sept. 1964. Ed.: Pauline Atherton. Copenhagen: Munksgaard 1965. S.295-310.
  5. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.02
    0.021158509 = product of:
      0.06347553 = sum of:
        0.010763514 = weight(_text_:in in 3366) [ClassicSimilarity], result of:
          0.010763514 = score(doc=3366,freq=12.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.18406484 = fieldWeight in 3366, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3366)
        0.052712012 = weight(_text_:education in 3366) [ClassicSimilarity], result of:
          0.052712012 = score(doc=3366,freq=2.0), product of:
            0.2025344 = queryWeight, product of:
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.042989567 = queryNorm
            0.260262 = fieldWeight in 3366, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3366)
      0.33333334 = coord(2/6)
    
    Abstract
    Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  6. Zhang, J.; Zhao, Y.: ¬A user term visualization analysis based on a social question and answer log (2013) 0.02
    0.020107657 = product of:
      0.060322966 = sum of:
        0.0076109543 = weight(_text_:in in 2715) [ClassicSimilarity], result of:
          0.0076109543 = score(doc=2715,freq=6.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.1301535 = fieldWeight in 2715, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2715)
        0.052712012 = weight(_text_:education in 2715) [ClassicSimilarity], result of:
          0.052712012 = score(doc=2715,freq=2.0), product of:
            0.2025344 = queryWeight, product of:
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.042989567 = queryNorm
            0.260262 = fieldWeight in 2715, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2715)
      0.33333334 = coord(2/6)
    
    Abstract
    The authors of this paper investigate terms of consumers' diabetes based on a log from the Yahoo!Answers social question and answers (Q&A) forum, ascertain characteristics and relationships among terms related to diabetes from the consumers' perspective, and reveal users' diabetes information seeking patterns. In this study, the log analysis method, data coding method, and visualization multidimensional scaling analysis method were used for analysis. The visual analyses were conducted at two levels: terms analysis within a category and category analysis among the categories in the schema. The findings show that the average number of words per question was 128.63, the average number of sentences per question was 8.23, the average number of words per response was 254.83, and the average number of sentences per response was 16.01. There were 12 categories (Cause & Pathophysiology, Sign & Symptom, Diagnosis & Test, Organ & Body Part, Complication & Related Disease, Medication, Treatment, Education & Info Resource, Affect, Social & Culture, Lifestyle, and Nutrient) in the diabetes related schema which emerged from the data coding analysis. The analyses at the two levels show that terms and categories were clustered and patterns were revealed. Future research directions are also included.
  7. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.01
    0.014270993 = product of:
      0.042812977 = sum of:
        0.011185773 = weight(_text_:in in 1202) [ClassicSimilarity], result of:
          0.011185773 = score(doc=1202,freq=36.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.1912858 = fieldWeight in 1202, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
        0.031627204 = weight(_text_:education in 1202) [ClassicSimilarity], result of:
          0.031627204 = score(doc=1202,freq=2.0), product of:
            0.2025344 = queryWeight, product of:
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.042989567 = queryNorm
            0.1561572 = fieldWeight in 1202, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7112455 = idf(docFreq=1080, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
      0.33333334 = coord(2/6)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies. Those of us who work in traditional libraries typically assume that our systems of classification, the Library of Congress Classification (LCC) and the Dewey Decimal Classification (DDC), are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations.
You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. 
Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
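The retrieval gap Beagle quantifies (50 of 100 titles match "urban", 35 match "city", with no overlap, so a single title-keyword search misses half the subclass or more) is easy to reproduce on any title list. A toy sketch with invented titles standing in for the HT 101-395 shelf scan:

```python
def keyword_coverage(titles, keyword):
    """Fraction of titles containing the keyword (case-insensitive
    substring match, so 'urban' also matches 'urbanization')."""
    kw = keyword.lower()
    hits = sum(1 for t in titles if kw in t.lower())
    return hits / len(titles)

# Invented sample; the real scan covered 100 titles under HT 101-395.
titles = ["Urban sociology", "The city in history",
          "Urbanization and its discontents", "Community and society"]
print(keyword_coverage(titles, "urban"))  # 0.5: half the titles
print(keyword_coverage(titles, "city"))   # 0.25
```

Because the two keyword sets are disjoint, no single search recovers the whole subclass; this is exactly the non-retrieval problem the c-space visualization is meant to mitigate.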
  8. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.01
    0.013850367 = product of:
      0.0415511 = sum of:
        0.012428636 = weight(_text_:in in 2755) [ClassicSimilarity], result of:
          0.012428636 = score(doc=2755,freq=4.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.21253976 = fieldWeight in 2755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.029122464 = product of:
          0.05824493 = sum of:
            0.05824493 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.05824493 = score(doc=2755,freq=2.0), product of:
                0.15054214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042989567 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    1. 2.2016 18:25:22
    Series
    Lecture notes in computer science ; 9398
  9. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.01128146 = product of:
      0.033844378 = sum of:
        0.009133145 = weight(_text_:in in 3355) [ClassicSimilarity], result of:
          0.009133145 = score(doc=3355,freq=6.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.1561842 = fieldWeight in 3355, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.024711233 = product of:
          0.049422465 = sum of:
            0.049422465 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.049422465 = score(doc=3355,freq=4.0), product of:
                0.15054214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042989567 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    Footnote
    Rez. in: JASIST 67(2017) no.2, S.533-536 (White, H.D.).
    LCSH
    Communication in science / Data processing
    Subject
    Communication in science / Data processing
  10. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.01
    0.010129899 = product of:
      0.030389696 = sum of:
        0.012916218 = weight(_text_:in in 3693) [ClassicSimilarity], result of:
          0.012916218 = score(doc=3693,freq=12.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.22087781 = fieldWeight in 3693, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.017473478 = product of:
          0.034946956 = sum of:
            0.034946956 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.034946956 = score(doc=3693,freq=2.0), product of:
                0.15054214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042989567 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of the classified documents; the results are presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved an original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
  11. Trunk, D.: Semantische Netze in Informationssystemen : Verbesserung der Suche durch Interaktion und Visualisierung (2005) 0.01
    0.009695257 = product of:
      0.02908577 = sum of:
        0.008700045 = weight(_text_:in in 2500) [ClassicSimilarity], result of:
          0.008700045 = score(doc=2500,freq=4.0), product of:
            0.058476754 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.042989567 = queryNorm
            0.14877784 = fieldWeight in 2500, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2500)
        0.020385725 = product of:
          0.04077145 = sum of:
            0.04077145 = weight(_text_:22 in 2500) [ClassicSimilarity], result of:
              0.04077145 = score(doc=2500,freq=2.0), product of:
                0.15054214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042989567 = queryNorm
                0.2708308 = fieldWeight in 2500, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2500)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Semantic networks support the search process in information retrieval. They consist of related concepts and help users find the right vocabulary for formulating queries. An easily and intuitively graspable presentation together with interactive controls optimizes the search process over the concept structure. Hypertext, with its established point-and-click interaction, is a natural choice of interaction form. A visualization supporting cognitive abilities can be achieved by presenting the information by means of points and lines. Presented as application examples are the knowledge net in Brockhaus multimedial, WordSurfer by BiblioMondo, SpiderSearch by BOND, and the Topic Maps Visualization in dandelon.com and in the Portal Informationswissenschaft of AGI - Information Management Consultants.
    Date
    30. 1.2007 18:22:41
  12. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
Lecture given at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  13. Chen, C.: CiteSpace II : detecting and visualizing emerging trends and transient patterns in scientific literature (2006) 0.01
    
    Abstract
    This article describes the latest development of a generic approach to detecting and visualizing emerging trends and transient patterns in scientific literature. The work makes substantial theoretical and methodological contributions to progressive knowledge domain visualization. A specialty is conceptualized and visualized as a time-variant duality between two fundamental concepts in information science: research fronts and intellectual bases. A research front is defined as an emergent and transient grouping of concepts and underlying research issues. The intellectual base of a research front is its citation and co-citation footprint in scientific literature - an evolving network of scientific publications cited by research-front concepts. Kleinberg's (2002) burst-detection algorithm is adapted to identify emergent research-front concepts. Freeman's (1979) betweenness centrality metric is used to highlight potential pivotal points of paradigm shift over time. Two complementary visualization views are designed and implemented: cluster views and time-zone views. The contributions of the approach are that (a) the nature of an intellectual base is algorithmically and temporally identified by emergent research-front terms, (b) the value of a co-citation cluster is explicitly interpreted in terms of research-front concepts, and (c) visually prominent and algorithmically detected pivotal points substantially reduce the complexity of a visualized network. The modeling and visualization process is implemented in CiteSpace II, a Java application, and applied to the analysis of two research fields: mass extinction (1981-2004) and terrorism (1990-2003). Prominent trends and pivotal points in visualized networks were verified in collaboration with domain experts, who are the authors of pivotal-point articles. Practical implications of the work are discussed. A number of challenges and opportunities for future studies are identified.
    Date
    22. 7.2006 16:11:05
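Freeman's betweenness centrality, which the abstract above credits with highlighting pivotal points of paradigm shift, can be sketched in a few lines of Python. The toy co-citation graph and the brute-force path enumeration below are illustrative assumptions, not CiteSpace II's actual implementation (which uses far more efficient algorithms on large networks):

```python
from collections import deque
from itertools import combinations

def shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t by breadth-first search."""
    paths, best = [], None
    queue = deque([[s]])
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue  # longer than a known shortest path
        node = path[-1]
        if node == t:
            best = len(path)
            paths.append(path)
            continue
        for nxt in graph[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

def betweenness(graph):
    """Unnormalized betweenness: for each node v, sum over pairs (s, t)
    of the fraction of shortest s-t paths that pass through v."""
    score = {v: 0.0 for v in graph}
    for s, t in combinations(graph, 2):
        paths = shortest_paths(graph, s, t)
        if not paths:
            continue
        for v in graph:
            if v in (s, t):
                continue
            through = sum(1 for p in paths if v in p)
            score[v] += through / len(paths)
    return score

# Two tight clusters {A, B} and {D, E} bridged only by C:
# C is the structural "pivotal point" between them.
graph = {
    "A": ["B", "C"], "B": ["A", "C"],
    "C": ["A", "B", "D", "E"],
    "D": ["C", "E"], "E": ["C", "D"],
}
scores = betweenness(graph)
pivot = max(scores, key=scores.get)
print(pivot)  # "C" lies on every shortest path between the clusters
```

In this sketch C scores 4.0 (it sits on all four cross-cluster shortest paths) while every other node scores 0, which is exactly the "visually prominent pivotal point" intuition.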
  14. Batorowska, H.; Kaminska-Czubala, B.: Information retrieval support : visualisation of the information space of a document (2014) 0.01
    
    Abstract
Acquiring knowledge in any field involves information retrieval, i.e. searching the available documents to identify answers to queries concerning the selected objects. Knowing the keywords, which are the names of the objects, enables situating the user's query in an information space organized as a thesaurus or faceted classification. Objectives: Identifying the areas in the information space which correspond to gaps in the user's personal knowledge, or in the domain knowledge, might prove useful in theory or practice. The aim of this paper is to present a realistic information-space model of a self-authored full-text document on information culture, indexed by the author of this article. Methodology: Having established the relations between the terms, particular modules (sets of terms connected by relations used in facet classification) are situated on a plane, similarly to a communication map. Conclusions drawn from the "journey" on the map, which is a visualization of the knowledge contained in the analysed document, are the crucial part of this paper. Results: The direct result of the research is the created model of information-space visualization of a given document (book, article, website). The proposed procedure can be used in practice as a new form of representation to map the contents of academic books and articles, alongside the traditional index, especially as an e-book auxiliary tool. In teaching, visualization of the information space of a document can help students understand the issues of classification, categorization and representation of new knowledge emerging in the human mind.
    Series
    Advances in knowledge organization; vol. 14
    Source
    Knowledge organization in the 21st century: between historical patterns and future prospects. Proceedings of the Thirteenth International ISKO Conference 19-22 May 2014, Kraków, Poland. Ed.: Wieslaw Babik
  15. Jäger-Dengler-Harles, I.: Informationsvisualisierung und Retrieval im Fokus der Informationspraxis (2013) 0.01
    
    Abstract
Methods and techniques of information visualization have been used in information seeking for about twenty years. This literature review presents selected visualization applications from recent years. They concern, on the one hand, the retrieval process, Boolean retrieval, faceted search, document relationships, serendipitous search, and result display, and on the other hand special applications such as map-based and adaptive visualization, citation networks, and knowledge organization systems. The deployment scenarios for information-visualization applications are diverse, ranging from small-format mobile applications to large-format presentations on high-resolution screens, and from integrated workstations for the individual user to interactive surfaces for collaborative retrieval. The concept of the blended library is introduced. The transferability of visualization applications to library catalogues is examined with regard to the use of catalogue input and the range of search entry points on offer. The article closes with perspectives on future development steps for library catalogues and on the influence of visualization applications on information practice.
    Date
    4. 2.2015 9:22:39
  16. Wu, I.-C.; Vakkari, P.: Effects of subject-oriented visualization tools on search by novices and intermediates (2018) 0.01
    
    Abstract
    This study explores how user subject knowledge influences search task processes and outcomes, as well as how search behavior is influenced by subject-oriented information visualization (IV) tools. To enable integrated searches, the proposed WikiMap + integrates search functions and IV tools (i.e., a topic network and hierarchical topic tree) and gathers information from Wikipedia pages and Google Search results. To evaluate the effectiveness of the proposed interfaces, we design subject-oriented tasks and adopt extended evaluation measures. We recruited 48 novices and 48 knowledgeable users, that is, intermediates, for the evaluation. Our results show that novices using the proposed interface demonstrate better search performance than intermediates using Wikipedia. We therefore conclude that our tools help close the gap between novices and intermediates in information searches. The results also show that intermediates can take advantage of the search tool by leveraging the IV tools to browse subtopics, and formulate better queries with less effort. We conclude that embedding the IV and the search tools in the interface can result in different search behavior but improved task performance. We provide implications to design search systems to include IV features adapted to user levels of subject knowledge to help them achieve better task performance.
    Date
    9.12.2018 16:22:25
  17. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.01
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Date
    22. 3.2008 14:29:25
  18. Wu, K.-C.; Hsieh, T.-Y.: Affective choosing of clustering and categorization representations in e-book interfaces (2016) 0.01
    
    Abstract
    Purpose - The purpose of this paper is to investigate user experiences with a touch-wall interface featuring both clustering and categorization representations of available e-books in a public library to understand human information interactions under work-focused and recreational contexts. Design/methodology/approach - Researchers collected questionnaires from 251 New Taipei City Library visitors who used the touch-wall interface to search for new titles. The authors applied structural equation modelling to examine relationships among hedonic/utilitarian needs, clustering and categorization representations, perceived ease of use (EU) and the extent to which users experienced anxiety and uncertainty (AU) while interacting with the interface. Findings - Utilitarian users who have an explicit idea of what they intend to find tend to prefer the categorization interface. A hedonic-oriented user tends to prefer clustering interfaces. Users reported EU regardless of which interface they engaged with. Results revealed that use of the clustering interface had a negative correlation with AU. Users that seek to satisfy utilitarian needs tended to emphasize the importance of perceived EU, whilst pleasure-seeking users were a little more tolerant of anxiety or uncertainty. Originality/value - The Online Public Access Catalogue (OPAC) encourages library visitors to borrow digital books through the implementation of an information visualization system. This situation poses an opportunity to validate uses and gratification theory. People with hedonic/utilitarian needs displayed different risk-control attitudes and affected uncertainty using the interface. Knowledge about user interaction with such interfaces is vital when launching the development of a new OPAC.
    Date
    20. 1.2015 18:30:22
  19. Osinska, V.; Kowalska, M.; Osinski, Z.: ¬The role of visualization in the shaping and exploration of the individual information space : part 1 (2018) 0.01
    
    Abstract
    Studies on the state and structure of digital knowledge concerning science generally relate to macro and meso scales. Supported by visualizations, these studies can deliver knowledge about emerging scientific fields or collaboration between countries, scientific centers, or groups of researchers. Analyses of individual activities or single scientific career paths are rarely presented and discussed. The authors decided to fill this gap and developed a web application for visualizing the scientific output of particular researchers. This free software based on bibliographic data from local databases, provides six layouts for analysis. Researchers can see the dynamic characteristics of their own writing activity, the time and place of publication, and the thematic scope of research problems. They can also identify cooperation networks, and consequently, study the dependencies and regularities in their own scientific activity. The current article presents the results of a study of the application's usability and functionality as well as attempts to define different user groups. A survey about the interface was sent to select researchers employed at Nicolaus Copernicus University. The results were used to answer the question as to whether such a specialized visualization tool can significantly augment the individual information space of the contemporary researcher.
    Date
    21.12.2018 17:22:13
  20. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    
    Abstract
    A PICTURE is said to be worth a thousand words. That metaphor might be expected to pertain a fortiori in the case of scientific papers, where a figure can brilliantly illuminate an idea that might otherwise be baffling. Papers with figures in them should thus be easier to grasp than those without. They should therefore reach larger audiences and, in turn, be more influential simply by virtue of being more widely read. But are they?
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures-which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more-than-650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score-a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Dr Howe and his colleagues do, however, believe that the study of diagrams can result in new insights. A figure showing new metabolic pathways in a cell, for example, may summarise hundreds of experiments. Since illustrations can convey important scientific concepts in this way, they think that browsing through related figures from different papers may help researchers come up with new theories. As Dr Howe puts it, "the unit of scientific currency is closer to the figure than to the paper." With this thought in mind, the team have created a website (viziometrics.org (http://viziometrics.org/) ) where the millions of images sorted by their program can be searched using key words. Their next plan is to extract the information from particular types of scientific figure, to create comprehensive "super" figures: a giant network of all the known chemical processes in a cell for example, or the best-available tree of life. At just one such superfigure per paper, though, the citation records of articles containing such all-embracing diagrams may very well undermine the correlation that prompted their creation in the first place. Call it the ultimate marriage of chart and science.
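The Eigenfactor idea described above, weighting a citation by the influence of the paper that makes it, is essentially PageRank applied to the citation graph. The following is a minimal power-iteration sketch under invented paper names and links, not the actual Eigenfactor computation:

```python
def pagerank(links, damping=0.85, iters=100):
    """Minimal power-iteration PageRank.
    `links[p]` lists the papers that paper p cites."""
    papers = list(links)
    n = len(papers)
    rank = {p: 1.0 / n for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in papers}
        for p, cited in links.items():
            if not cited:
                # A paper citing nothing spreads its rank evenly.
                for q in papers:
                    new[q] += damping * rank[p] / n
            else:
                for q in cited:
                    new[q] += damping * rank[p] / len(cited)
        rank = new
    return rank

# D and E each receive exactly one citation, but D's comes from
# the heavily cited A while E's comes from the obscure C.
links = {
    "A": ["D"],
    "B": ["A"],
    "C": ["A", "E"],
    "D": [],
    "E": [],
}
rank = pagerank(links)
# D ends up ranked above E: a citation from the influential A
# is worth more than a citation from the little-cited C.
```

This is the sense in which "a citation in a paper that is itself highly cited is worth more than one in a paper that is not": the weight a citation carries is proportional to the rank of the citing paper, computed recursively.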

Languages

  • e 131
  • d 43
  • a 1

Types

  • a 140
  • el 30
  • m 15
  • x 12
  • r 2
  • s 2
  • b 1
  • p 1
