Search (146 results, page 1 of 8)

  • theme_ss:"Visualisierung"
  • language_ss:"e"
  1. Thissen, F.: Screen-Design-Manual : Communicating Effectively Through Multimedia (2003) 0.03
    0.033866365 = product of:
      0.14514156 = sum of:
        0.008277881 = weight(_text_:und in 1397) [ClassicSimilarity], result of:
          0.008277881 = score(doc=1397,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17315367 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.0031180005 = weight(_text_:in in 1397) [ClassicSimilarity], result of:
          0.0031180005 = score(doc=1397,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.10626988 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.008277881 = weight(_text_:und in 1397) [ClassicSimilarity], result of:
          0.008277881 = score(doc=1397,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17315367 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.05837661 = weight(_text_:einzelne in 1397) [ClassicSimilarity], result of:
          0.05837661 = score(doc=1397,freq=4.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.4598239 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.05837661 = weight(_text_:einzelne in 1397) [ClassicSimilarity], result of:
          0.05837661 = score(doc=1397,freq=4.0), product of:
            0.12695427 = queryWeight, product of:
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.021569785 = queryNorm
            0.4598239 = fieldWeight in 1397, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.885746 = idf(docFreq=333, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.0014085418 = weight(_text_:s in 1397) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=1397,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 1397, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1397)
        0.0073060202 = product of:
          0.0146120405 = sum of:
            0.0146120405 = weight(_text_:22 in 1397) [ClassicSimilarity], result of:
              0.0146120405 = score(doc=1397,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.19345059 = fieldWeight in 1397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1397)
          0.5 = coord(1/2)
      0.23333333 = coord(7/30)
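     The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output: each matching term contributes score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm, and the per-term scores are summed and scaled by the coordination factor coord(matching clauses / total clauses). A minimal Python sketch that re-computes the figures shown for the term 'und' in document 1397 (all numbers taken from the tree above):

     from math import sqrt

     def classic_term_score(freq, idf, query_norm, field_norm):
         """Re-create one branch of a Lucene ClassicSimilarity explain tree."""
         tf = sqrt(freq)                       # 2.0 for freq=4.0
         query_weight = idf * query_norm       # 0.04780656 for _text_:und
         field_weight = tf * idf * field_norm  # 0.17315367
         return query_weight * field_weight    # 0.008277881

     # Values copied from the explain output for _text_:und in doc 1397
     score_und = classic_term_score(freq=4.0, idf=2.216367,
                                    query_norm=0.021569785, field_norm=0.0390625)
     print(round(score_und, 9))  # ~0.008277881, matching the line above

     # The record score sums all term contributions and applies coord(7/30):
     # 0.14514156 * 0.23333333 ~ 0.033866365, the total shown for this record.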
    
    Abstract
    The "Screen Design Manual" provides designers of interactive media with a practical working guide for preparing and presenting information that is suitable for both their target groups and the media they are using. It describes background information and relationships, clarifies them with the help of examples, and encourages further development of the language of digital media. In addition to the basics of the psychology of perception and learning, ergonomics, communication theory, imagery research, and aesthetics, the book also explores the design of navigation and orientation elements. Guidelines and checklists, along with the unique presentation of the book, support the application of information in practice.
    Classification
    ST 325 Informatik / Monographien / Einzelne Anwendungen der Datenverarbeitung / Multimedia
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
    Date
    22. 3.2008 14:29:25
    Pages
    336 S
    RVK
    ST 325 Informatik / Monographien / Einzelne Anwendungen der Datenverarbeitung / Multimedia
    ST 253 Informatik / Monographien / Software und -entwicklung / Web-Programmierwerkzeuge (A-Z)
  2. Pejtersen, A.M.: The BookHouse : an icon based database system for fiction retrieval in public libraries (1992) 0.02
    0.021747924 = product of:
      0.13048755 = sum of:
        0.0052914224 = weight(_text_:in in 3088) [ClassicSimilarity], result of:
          0.0052914224 = score(doc=3088,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.18034597 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
        0.040605206 = weight(_text_:bibliotheken in 3088) [ClassicSimilarity], result of:
          0.040605206 = score(doc=3088,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.49958694 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
        0.040605206 = weight(_text_:bibliotheken in 3088) [ClassicSimilarity], result of:
          0.040605206 = score(doc=3088,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.49958694 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
        0.040605206 = weight(_text_:bibliotheken in 3088) [ClassicSimilarity], result of:
          0.040605206 = score(doc=3088,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.49958694 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
        0.0033805002 = weight(_text_:s in 3088) [ClassicSimilarity], result of:
          0.0033805002 = score(doc=3088,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14414869 = fieldWeight in 3088, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.09375 = fieldNorm(doc=3088)
      0.16666667 = coord(5/30)
    
    Area
    Öffentliche Bibliotheken
    Pages
    S. - .
  3. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.02
    0.016287295 = product of:
      0.08143648 = sum of:
        0.0070240153 = weight(_text_:und in 3205) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=3205,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.0064806426 = weight(_text_:in in 3205) [ClassicSimilarity], result of:
          0.0064806426 = score(doc=3205,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22087781 = fieldWeight in 3205, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.0070240153 = weight(_text_:und in 3205) [ClassicSimilarity], result of:
          0.0070240153 = score(doc=3205,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.14692576 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.020302603 = weight(_text_:bibliotheken in 3205) [ClassicSimilarity], result of:
          0.020302603 = score(doc=3205,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.24979347 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.020302603 = weight(_text_:bibliotheken in 3205) [ClassicSimilarity], result of:
          0.020302603 = score(doc=3205,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.24979347 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.020302603 = weight(_text_:bibliotheken in 3205) [ClassicSimilarity], result of:
          0.020302603 = score(doc=3205,freq=2.0), product of:
            0.08127756 = queryWeight, product of:
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.021569785 = queryNorm
            0.24979347 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.768121 = idf(docFreq=2775, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
      0.2 = coord(6/30)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
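     As a rough illustration of the pipeline sketched in the abstract (summarization techniques and similarity measures chained over article metadata), the following hedged Python sketch groups article texts into map areas using TF-IDF, cosine similarity and k-means; the sample data and the choice of scikit-learn are assumptions for illustration, not the project's actual implementation:

     from sklearn.cluster import KMeans
     from sklearn.feature_extraction.text import TfidfVectorizer
     from sklearn.metrics.pairwise import cosine_similarity

     # Hypothetical article metadata (titles/abstracts), not Open Knowledge Maps data
     documents = [
         "Visualizing citation networks in digital libraries",
         "Topic models for scholarly article recommendation",
         "Interactive maps of scientific knowledge domains",
         "Recommending related work from citation contexts",
     ]

     tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
     similarity = cosine_similarity(tfidf)   # article-to-article similarity
     areas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)

     # Each cluster would become one area ("bubble") of the knowledge map
     print(similarity.round(2))
     print(areas)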
    Content
     Contribution to a thematic issue on 'Computerlinguistik und Bibliotheken' (computational linguistics and libraries). Cf.: http://0277.ch/ojs/index.php/cdrs_0277/article/view/157/355.
  4. Bekavac, B.; Herget, J.; Hierl, S.; Öttl, S.: Visualisierungskomponenten bei webbasierten Suchmaschinen : Methoden, Kriterien und ein Marktüberblick (2007) 0.01
    0.0066485857 = product of:
      0.04986439 = sum of:
        0.021681098 = weight(_text_:und in 399) [ClassicSimilarity], result of:
          0.021681098 = score(doc=399,freq=14.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.4535172 = fieldWeight in 399, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.0030866629 = weight(_text_:in in 399) [ClassicSimilarity], result of:
          0.0030866629 = score(doc=399,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.10520181 = fieldWeight in 399, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.021681098 = weight(_text_:und in 399) [ClassicSimilarity], result of:
          0.021681098 = score(doc=399,freq=14.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.4535172 = fieldWeight in 399, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
        0.0034155329 = weight(_text_:s in 399) [ClassicSimilarity], result of:
          0.0034155329 = score(doc=399,freq=6.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.14564252 = fieldWeight in 399, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=399)
      0.13333334 = coord(4/30)
    
    Abstract
     Web-based search engines increasingly offer systems with visualization components for presenting results. The visualization approaches differ markedly from one another in their objectives and execution. The following article describes the visualization methods in use, systematizes them by means of a classification, introduces the leading freely accessible systems, and compares them against the criteria derived from that systematization. Typical problem areas are identified, and the most important similarities and differences between the systems examined are worked out. The article closes with the presentation of two innovative visualization concepts: the visualization of relations within result sets and the visualization of relations when searching for music.
    Source
    Information - Wissenschaft und Praxis. 58(2007) H.3, S.149-158
  5. Börner, K.: Atlas of knowledge : anyone can map (2015) 0.01
    0.0065397546 = product of:
      0.039238527 = sum of:
        0.009933459 = weight(_text_:und in 3355) [ClassicSimilarity], result of:
          0.009933459 = score(doc=3355,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.20778441 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.0045825066 = weight(_text_:in in 3355) [ClassicSimilarity], result of:
          0.0045825066 = score(doc=3355,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1561842 = fieldWeight in 3355, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.009933459 = weight(_text_:und in 3355) [ClassicSimilarity], result of:
          0.009933459 = score(doc=3355,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.20778441 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.002390375 = weight(_text_:s in 3355) [ClassicSimilarity], result of:
          0.002390375 = score(doc=3355,freq=4.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.101928525 = fieldWeight in 3355, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3355)
        0.012398728 = product of:
          0.024797456 = sum of:
            0.024797456 = weight(_text_:22 in 3355) [ClassicSimilarity], result of:
              0.024797456 = score(doc=3355,freq=4.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.32829654 = fieldWeight in 3355, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3355)
          0.5 = coord(1/2)
      0.16666667 = coord(5/30)
    
    BK
    02.10 Wissenschaft und Gesellschaft
    Classification
    02.10 Wissenschaft und Gesellschaft
    Date
    22. 1.2017 16:54:03
    22. 1.2017 17:10:56
    Footnote
     Review in: JASIST 67(2017) no.2, S.533-536 (White, H.D.).
    LCSH
    Communication in science / Data processing
    Pages
    XI, 211 S
    Subject
    Communication in science / Data processing
  6. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    0.0055586295 = product of:
      0.055586293 = sum of:
        0.048359692 = weight(_text_:informationswissenschaft in 2995) [ClassicSimilarity], result of:
          0.048359692 = score(doc=2995,freq=2.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.49770427 = fieldWeight in 2995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
        0.004409519 = weight(_text_:in in 2995) [ClassicSimilarity], result of:
          0.004409519 = score(doc=2995,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.15028831 = fieldWeight in 2995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
        0.0028170836 = weight(_text_:s in 2995) [ClassicSimilarity], result of:
          0.0028170836 = score(doc=2995,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.120123915 = fieldWeight in 2995, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
      0.1 = coord(3/30)
    
    Pages
    S.203-216
    Series
    Schriften zur Informationswissenschaft; Bd.66
    Source
     Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
  7. Christoforidis, A.; Heuwing, B.; Mandl, T.: Visualising topics in document collections : an analysis of the interpretation process of historians (2017) 0.00
    0.0044469037 = product of:
      0.044469036 = sum of:
        0.038687754 = weight(_text_:informationswissenschaft in 3555) [ClassicSimilarity], result of:
          0.038687754 = score(doc=3555,freq=2.0), product of:
            0.09716552 = queryWeight, product of:
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.021569785 = queryNorm
            0.3981634 = fieldWeight in 3555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.504705 = idf(docFreq=1328, maxDocs=44218)
              0.0625 = fieldNorm(doc=3555)
        0.003527615 = weight(_text_:in in 3555) [ClassicSimilarity], result of:
          0.003527615 = score(doc=3555,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.120230645 = fieldWeight in 3555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=3555)
        0.002253667 = weight(_text_:s in 3555) [ClassicSimilarity], result of:
          0.002253667 = score(doc=3555,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 3555, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=3555)
      0.1 = coord(3/30)
    
    Pages
    S.37-49
    Series
    Schriften zur Informationswissenschaft; Bd. 70
  8. Neubauer, G.: Visualization of typed links in linked data (2017) 0.00
    0.003818758 = product of:
      0.028640684 = sum of:
        0.011706693 = weight(_text_:und in 3912) [ClassicSimilarity], result of:
          0.011706693 = score(doc=3912,freq=8.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 3912, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.0038187557 = weight(_text_:in in 3912) [ClassicSimilarity], result of:
          0.0038187557 = score(doc=3912,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1301535 = fieldWeight in 3912, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.011706693 = weight(_text_:und in 3912) [ClassicSimilarity], result of:
          0.011706693 = score(doc=3912,freq=8.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.24487628 = fieldWeight in 3912, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
        0.0014085418 = weight(_text_:s in 3912) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=3912,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 3912, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3912)
      0.13333334 = coord(4/30)
    
    Abstract
     The subject of this work is the visualization of typed links in Linked Data. The scientific fields that broadly delimit the content of the contribution are the Semantic Web, the Web of Data, and information visualization. The Semantic Web, introduced by Tim Berners-Lee in 2001, is an extension of the World Wide Web (Web 2.0). Current research concerns how information on the World Wide Web can be interlinked. To make such connections perceivable and processable, visualizations are the central requirement and a main part of data processing. In the context of the Semantic Web, representations of interconnected information are handled by means of graphs. The primary motivation for this work is to describe the design of Linked Data visualization concepts, whose principles are introduced in a theoretical approach. Building on this context, the information is expanded step by step, with the aim of offering practical guidelines, and the resulting design guidelines are interlinked. By describing the designs of two alternative visualizations for a standardized web application that visualizes Linked Data as a network, a test of their compatibility could be carried out. The practical part therefore covers the design phase, the results, and the future requirements of the project that emerged from this testing.
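     To make the notion of typed links concrete, here is a small hedged sketch (an illustration of the general idea, not the application developed in the thesis) that builds a directed graph from RDF-style triples with networkx, keeping each predicate as the type of its edge so that a network visualization can style or label links by type:

     import networkx as nx

     # Hypothetical Linked Data triples (subject, predicate, object)
     triples = [
         ("dbpedia:Vienna", "dbo:country", "dbpedia:Austria"),
         ("dbpedia:Vienna", "rdf:type", "dbo:City"),
         ("dbpedia:Austria", "rdf:type", "dbo:Country"),
     ]

     G = nx.MultiDiGraph()
     for s, p, o in triples:
         G.add_edge(s, o, key=p, predicate=p)  # the predicate is the link's type

     # Every typed link is now available for styling in a network visualization
     for s, o, p in G.edges(keys=True):
         print(f"{s} --[{p}]--> {o}")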
    Source
    Mitteilungen der Vereinigung Österreichischer Bibliothekarinnen und Bibliothekare. 70(2017) H.2, S.179-199
  9. Pfeffer, M.; Eckert, K.; Stuckenschmidt, H.: Visual analysis of classification systems and library collections (2008) 0.00
    0.0036125847 = product of:
      0.027094383 = sum of:
        0.009365354 = weight(_text_:und in 317) [ClassicSimilarity], result of:
          0.009365354 = score(doc=317,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
        0.006110009 = weight(_text_:in in 317) [ClassicSimilarity], result of:
          0.006110009 = score(doc=317,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 317, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
        0.009365354 = weight(_text_:und in 317) [ClassicSimilarity], result of:
          0.009365354 = score(doc=317,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
        0.002253667 = weight(_text_:s in 317) [ClassicSimilarity], result of:
          0.002253667 = score(doc=317,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.09609913 = fieldWeight in 317, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0625 = fieldNorm(doc=317)
      0.13333334 = coord(4/30)
    
    Abstract
    In this demonstration we present a visual analysis approach that addresses both developers and users of hierarchical classification systems. The approach supports an intuitive understanding of the structure and current use in relation to a specific collection. We will also demonstrate its application for the development and management of library collections.
    Pages
    S.436-439
    Series
    Lecture notes in computer science ; 5173
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  10. Representation in scientific practice revisited (2014) 0.00
    0.0036055923 = product of:
      0.02704194 = sum of:
        0.009365354 = weight(_text_:und in 3543) [ClassicSimilarity], result of:
          0.009365354 = score(doc=3543,freq=8.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 3543, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
        0.0063594985 = weight(_text_:in in 3543) [ClassicSimilarity], result of:
          0.0063594985 = score(doc=3543,freq=26.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2167489 = fieldWeight in 3543, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
        0.009365354 = weight(_text_:und in 3543) [ClassicSimilarity], result of:
          0.009365354 = score(doc=3543,freq=8.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 3543, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
        0.001951733 = weight(_text_:s in 3543) [ClassicSimilarity], result of:
          0.001951733 = score(doc=3543,freq=6.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.0832243 = fieldWeight in 3543, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.03125 = fieldNorm(doc=3543)
      0.13333334 = coord(4/30)
    
    Abstract
     Representation in Scientific Practice, published by the MIT Press in 1990, helped coalesce a long-standing interest in scientific visualization among historians, philosophers, and sociologists of science and remains a touchstone for current investigations in science and technology studies. This volume revisits the topic, taking into account both the changing conceptual landscape of STS and the emergence of new imaging technologies in scientific practice. It offers cutting-edge research on a broad array of fields that study information as well as short reflections on the evolution of the field by leading scholars, including some of the contributors to the 1990 volume. The essays consider the ways in which viewing experiences are crafted in the digital era; the embodied nature of work with digital technologies; the constitutive role of materials and technologies -- from chalkboards to brain scans -- in the production of new scientific knowledge; the metaphors and images mobilized by communities of practice; and the status and significance of scientific imagery in professional and popular culture. Contributors: Morana Alac, Michael Barany, Anne Beaulieu, Annamaria Carusi, Catelijne Coopmans, Lorraine Daston, Sarah de Rijcke, Joseph Dumit, Emma Frow, Yann Giraud, Aud Sissel Hoel, Martin Kemp, Bruno Latour, John Law, Michael Lynch, Donald MacKenzie, Cyrus Mody, Natasha Myers, Rachel Prentice, Arie Rip, Martin Ruivenkamp, Lucy Suchman, Janet Vertesi, Steve Woolgar
    BK
    30.02 Philosophie und Theorie der Naturwissenschaften
    30.03 Methoden und Techniken in den Naturwissenschaften
    Classification
    30.02 Philosophie und Theorie der Naturwissenschaften
    30.03 Methoden und Techniken in den Naturwissenschaften
    Footnote
     Review in: JASIST 68(2017) no.4, S.1068-1069 (Hans-Jörg Rheinberger)
    Pages
    IX, 366 S
    Type
    s
  11. Tufte, E.R.: Envisioning information (1990) 0.00
    0.0033920307 = product of:
      0.02544023 = sum of:
        0.009933459 = weight(_text_:und in 3733) [ClassicSimilarity], result of:
          0.009933459 = score(doc=3733,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.20778441 = fieldWeight in 3733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
        0.0026457112 = weight(_text_:in in 3733) [ClassicSimilarity], result of:
          0.0026457112 = score(doc=3733,freq=2.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.09017298 = fieldWeight in 3733, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
        0.009933459 = weight(_text_:und in 3733) [ClassicSimilarity], result of:
          0.009933459 = score(doc=3733,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.20778441 = fieldWeight in 3733, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
        0.0029275995 = weight(_text_:s in 3733) [ClassicSimilarity], result of:
          0.0029275995 = score(doc=3733,freq=6.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.124836445 = fieldWeight in 3733, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3733)
      0.13333334 = coord(4/30)
    
    BK
    70.03 Methoden, Techniken und Organisation der sozialwissenschaftlichen Forschung
    Classification
    70.03 Methoden, Techniken und Organisation der sozialwissenschaftlichen Forschung
    Footnote
     Review in: College & research libraries 52(1991) S.382-383 (P. Wilson); Knowledge organization 20(1993) no.1, S.61-62 (M. Giesecke)
    Pages
    126 S
  12. Eckert, K.: Thesaurus analysis and visualization in semantic search applications (2007) 0.00
    0.0032771442 = product of:
      0.02457858 = sum of:
        0.008277881 = weight(_text_:und in 3222) [ClassicSimilarity], result of:
          0.008277881 = score(doc=3222,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17315367 = fieldWeight in 3222, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
        0.006614278 = weight(_text_:in in 3222) [ClassicSimilarity], result of:
          0.006614278 = score(doc=3222,freq=18.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22543246 = fieldWeight in 3222, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
        0.008277881 = weight(_text_:und in 3222) [ClassicSimilarity], result of:
          0.008277881 = score(doc=3222,freq=4.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17315367 = fieldWeight in 3222, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
        0.0014085418 = weight(_text_:s in 3222) [ClassicSimilarity], result of:
          0.0014085418 = score(doc=3222,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.060061958 = fieldWeight in 3222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3222)
      0.13333334 = coord(4/30)
    
    Abstract
     The use of thesaurus-based indexing is a common approach for increasing the performance of information retrieval. In this thesis, we examine the suitability of a thesaurus for a given set of information and evaluate improvements of existing thesauri to get better search results. In this area, we focus on two aspects: 1. We demonstrate an analysis of the indexing results achieved by an automatic document indexer and the involved thesaurus. 2. We propose a method for thesaurus evaluation which is based on a combination of statistical measures and appropriate visualization techniques that support the detection of potential problems in a thesaurus. In this chapter, we give an overview of the context of our work. Next, we briefly outline the basics of thesaurus-based information retrieval and describe the Collexis Engine that was used for our experiments. In Chapter 3, we describe two experiments in automatically indexing documents in the areas of medicine and economics with corresponding thesauri and compare the results to available manual annotations. Chapter 4 describes methods for assessing thesauri and visualizing the result in terms of a treemap. We depict examples of interesting observations supported by the method and show that we actually find critical problems. We conclude with a discussion of open questions and future research in Chapter 5.
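     As a loose illustration of the statistical side of such an analysis (comparing how thesaurus concepts are actually used by the automatic indexer), the following hedged Python sketch counts concept assignments over a set of indexed documents and flags concepts that were never used; the data and the simple zero-usage criterion are invented for the example:

     from collections import Counter

     # Hypothetical thesaurus concepts and automatic indexing results
     thesaurus = {"economics", "inflation", "labour market", "monetary policy"}
     indexed_docs = [
         {"economics", "inflation"},
         {"economics", "monetary policy"},
         {"inflation"},
     ]

     usage = Counter(c for doc in indexed_docs for c in doc if c in thesaurus)

     for concept in sorted(thesaurus):
         count = usage.get(concept, 0)
         note = "  <- never assigned, potentially problematic" if count == 0 else ""
         print(f"{concept}: {count}{note}")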
    Imprint
    Mannheim : Fakultät für Mathematik und Informatik
    Pages
    74 S
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  13. Eckert, K.; Pfeffer, M.; Stuckenschmidt, H.: Assessing thesaurus-based annotations for semantic search applications (2008) 0.00
    0.0031610115 = product of:
      0.023707585 = sum of:
        0.008194685 = weight(_text_:und in 1528) [ClassicSimilarity], result of:
          0.008194685 = score(doc=1528,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 1528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.0053462577 = weight(_text_:in in 1528) [ClassicSimilarity], result of:
          0.0053462577 = score(doc=1528,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1822149 = fieldWeight in 1528, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.008194685 = weight(_text_:und in 1528) [ClassicSimilarity], result of:
          0.008194685 = score(doc=1528,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.17141339 = fieldWeight in 1528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
        0.0019719584 = weight(_text_:s in 1528) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=1528,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 1528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1528)
      0.13333334 = coord(4/30)
    
    Abstract
     Statistical methods for automated document indexing are becoming an alternative to the manual assignment of keywords. We argue that the quality of the thesaurus used as a basis for indexing is of crucial importance in automatic indexing, both in regard to its ability to adequately cover the contents to be indexed and as a basis for the specific indexing method used. We present an interactive tool for thesaurus evaluation that is based on a combination of statistical measures and appropriate visualisation techniques that supports the detection of potential problems in a thesaurus. We describe the methods used and show that the tool supports the detection and correction of errors, leading to a better indexing result.
    Source
    International Journal of Metadata, Semantics and Ontologies. 3(2008) no.1, S.53-67
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Eckert, K.: The ICE-map visualization (2011) 0.00
    0.0024840718 = product of:
      0.024840716 = sum of:
        0.009365354 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.009365354 = score(doc=4743,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.006110009 = weight(_text_:in in 4743) [ClassicSimilarity], result of:
          0.006110009 = score(doc=4743,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.2082456 = fieldWeight in 4743, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.009365354 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.009365354 = score(doc=4743,freq=2.0), product of:
            0.04780656 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.021569785 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
      0.1 = coord(3/30)
    
    Abstract
     In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about the concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and the result of the statistical analysis by means of a treemap.
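     The information content of a concept is the standard information-theoretic quantity IC(c) = -log p(c), where p(c) is the probability that the concept (or one of its descendants) is used for indexing. A hedged Python sketch of that calculation over a toy concept hierarchy, not the ICE-Map implementation itself:

     import math

     # Hypothetical concept hierarchy (parent -> children) and usage counts
     children = {"science": ["physics", "biology"], "physics": [], "biology": []}
     usage = {"science": 2, "physics": 10, "biology": 4}

     def subtree_usage(concept):
         """Usage of a concept plus all of its descendants."""
         return usage.get(concept, 0) + sum(subtree_usage(c) for c in children.get(concept, []))

     total = subtree_usage("science")  # usage of the whole hierarchy

     def information_content(concept):
         return -math.log(subtree_usage(concept) / total)

     for c in ("science", "physics", "biology"):
         print(c, round(information_content(c), 3))  # rarely used concepts score higher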
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  15. Platis, N. et al.: Visualization of uncertainty in tag clouds (2016) 0.00
    0.0023665128 = product of:
      0.023665126 = sum of:
        0.006236001 = weight(_text_:in in 2755) [ClassicSimilarity], result of:
          0.006236001 = score(doc=2755,freq=4.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.21253976 = fieldWeight in 2755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.0028170836 = weight(_text_:s in 2755) [ClassicSimilarity], result of:
          0.0028170836 = score(doc=2755,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.120123915 = fieldWeight in 2755, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.078125 = fieldNorm(doc=2755)
        0.0146120405 = product of:
          0.029224081 = sum of:
            0.029224081 = weight(_text_:22 in 2755) [ClassicSimilarity], result of:
              0.029224081 = score(doc=2755,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.38690117 = fieldWeight in 2755, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2755)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Date
    1. 2.2016 18:25:22
    Pages
    S.127-132
    Series
    Lecture notes in computer science ; 9398
  16. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    0.0018923183 = product of:
      0.018923182 = sum of:
        0.008166543 = weight(_text_:in in 3883) [ClassicSimilarity], result of:
          0.008166543 = score(doc=3883,freq=14.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.27833787 = fieldWeight in 3883, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3883)
        0.008784681 = product of:
          0.026354041 = sum of:
            0.026354041 = weight(_text_:l in 3883) [ClassicSimilarity], result of:
              0.026354041 = score(doc=3883,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30739886 = fieldWeight in 3883, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3883)
          0.33333334 = coord(1/3)
        0.0019719584 = weight(_text_:s in 3883) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=3883,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 3883, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3883)
      0.1 = coord(3/30)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
    Source
    Proceedings of the Twelfth International Conference on Artificial Intelligence & Statistics (AI-STATS), JMLR W&CP 5, 2009. S.384-391
  17. Osinska, V.; Bala, P.: New methods for visualization and improvement of classification schemes : the case of computer science (2010) 0.00
    0.0016938116 = product of:
      0.016938116 = sum of:
        0.0064806426 = weight(_text_:in in 3693) [ClassicSimilarity], result of:
          0.0064806426 = score(doc=3693,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22087781 = fieldWeight in 3693, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.0016902501 = weight(_text_:s in 3693) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=3693,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 3693, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3693)
        0.008767224 = product of:
          0.017534448 = sum of:
            0.017534448 = weight(_text_:22 in 3693) [ClassicSimilarity], result of:
              0.017534448 = score(doc=3693,freq=2.0), product of:
                0.07553371 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.021569785 = queryNorm
                0.23214069 = fieldWeight in 3693, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3693)
          0.5 = coord(1/2)
      0.1 = coord(3/30)
    
    Abstract
     Generally, Computer Science (CS) classifications are inconsistent in taxonomy strategies. It is necessary to develop CS taxonomy research to combine its historical perspective, its current knowledge and its predicted future trends - including all breakthroughs in information and communication technology. In this paper we have analyzed the ACM Computing Classification System (CCS) by means of visualization maps. The important achievement of the current work is an effective visualization of classified documents from the ACM Digital Library. From the technical point of view, the innovation lies in the parallel use of analysis units: (sub)classes and keywords as well as a spherical 3D information surface. We have compared both the thematic and semantic maps of classified documents and results presented in Table 1. Furthermore, the proposed new method is used for content-related evaluation of the original scheme. Summing up: we improved an original ACM classification in the Computer Science domain by means of visualization.
    Date
    22. 7.2010 19:36:46
    Source
    Knowledge organization. 37(2010) no.3, S.157-172
  18. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.00
    0.0016102897 = product of:
      0.016102897 = sum of:
        0.0053462577 = weight(_text_:in in 3886) [ClassicSimilarity], result of:
          0.0053462577 = score(doc=3886,freq=6.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.1822149 = fieldWeight in 3886, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
        0.008784681 = product of:
          0.026354041 = sum of:
            0.026354041 = weight(_text_:l in 3886) [ClassicSimilarity], result of:
              0.026354041 = score(doc=3886,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.30739886 = fieldWeight in 3886, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3886)
          0.33333334 = coord(1/3)
        0.0019719584 = weight(_text_:s in 3886) [ClassicSimilarity], result of:
          0.0019719584 = score(doc=3886,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.08408674 = fieldWeight in 3886, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
      0.1 = coord(3/30)
    
    Abstract
    The paper investigates the acceleration of t-SNE, an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots, using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant. An illustrative usage sketch follows this entry.
    Source
    Journal of machine learning research. 15(2014), S.3221-3245
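    The Barnes-Hut approximation described above is exposed by common libraries. As a hedged illustration (not the authors' reference implementation), the sketch below uses scikit-learn's TSNE with method='barnes_hut'; the input array X is a random placeholder.

      # Minimal sketch: Barnes-Hut t-SNE via scikit-learn (illustrative, not the paper's code).
      import numpy as np
      from sklearn.manifold import TSNE

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 50))   # placeholder data; any (N, D) float array works

      # method='barnes_hut' approximates the gradient in O(N log N) instead of O(N^2);
      # 'angle' trades accuracy for speed (smaller = closer to the exact gradient).
      tsne = TSNE(n_components=2, perplexity=30.0, method='barnes_hut', angle=0.5,
                  init='pca', random_state=0)
      Y = tsne.fit_transform(X)         # (N, 2) embedding, ready for a scatter plot
      print(Y.shape)

    Setting angle closer to 0 makes the approximation tighter at the cost of speed, while method='exact' falls back to the quadratic gradient for small data sets.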
  19. Nehmadi, L.; Meyer, J.; Parmet, Y.; Ben-Asher, N.: Predicting a screen area's perceived importance from spatial and physical attributes (2011) 0.00
    0.0015700618 = product of:
      0.015700618 = sum of:
        0.0064806426 = weight(_text_:in in 4757) [ClassicSimilarity], result of:
          0.0064806426 = score(doc=4757,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22087781 = fieldWeight in 4757, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=4757)
        0.007529726 = product of:
          0.022589177 = sum of:
            0.022589177 = weight(_text_:l in 4757) [ClassicSimilarity], result of:
              0.022589177 = score(doc=4757,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.26348472 = fieldWeight in 4757, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4757)
          0.33333334 = coord(1/3)
        0.0016902501 = weight(_text_:s in 4757) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=4757,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 4757, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=4757)
      0.1 = coord(3/30)
    
    Abstract
    The editor's decision where and how to place items on a screen is crucial for the design of information displays, such as websites. We developed a statistical model that can facilitate automating this process by predicting the perceived importance of screen items from their location and size. The model was developed on the basis of a two-step experiment in which we asked participants to rate the importance of text articles that differed in size, screen location, and title size. Articles were presented either for 0.5 seconds or for an unlimited time. In a stepwise regression analysis, the model's variables accounted for 65% of the variance in the importance ratings. In a validation study, the model predicted 85% of the variance of the mean apparent importance of screen items. The model also predicted individual raters' importance perception ratings. We discuss the implications of such a model in the context of automating layout generation. An automated system for layout generation can optimize data presentation to suit users' individual information and display preferences. A hypothetical regression sketch follows this entry.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.9, S.1829-1838
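    As a rough sketch of the kind of model the entry describes, the code below fits ordinary least squares to synthetic data (a stand-in for the paper's stepwise regression); all predictor names, coefficients and ratings are invented for illustration.

      # Hypothetical sketch: predicting perceived importance from size and position (synthetic data).
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      n = 200
      area = rng.uniform(0.01, 0.25, n)      # fraction of the screen covered by the item
      x_pos = rng.uniform(0.0, 1.0, n)       # horizontal position (0 = left edge, 1 = right edge)
      y_pos = rng.uniform(0.0, 1.0, n)       # vertical position (0 = top, 1 = bottom)
      title_pt = rng.uniform(10, 32, n)      # title font size in points

      # Synthetic ratings: larger, higher-placed items with bigger titles are rated as more important.
      importance = 5.0*area - 2.0*y_pos - 0.5*x_pos + 0.05*title_pt + rng.normal(0, 0.2, n)

      X = np.column_stack([area, x_pos, y_pos, title_pt])
      model = LinearRegression().fit(X, importance)
      print("coefficients:", model.coef_)
      print("R^2 on the synthetic data:", round(model.score(X, importance), 3))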
  20. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.00
    0.0015700618 = product of:
      0.015700618 = sum of:
        0.0064806426 = weight(_text_:in in 3884) [ClassicSimilarity], result of:
          0.0064806426 = score(doc=3884,freq=12.0), product of:
            0.029340398 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.021569785 = queryNorm
            0.22087781 = fieldWeight in 3884, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
        0.007529726 = product of:
          0.022589177 = sum of:
            0.022589177 = weight(_text_:l in 3884) [ClassicSimilarity], result of:
              0.022589177 = score(doc=3884,freq=2.0), product of:
                0.0857324 = queryWeight, product of:
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.021569785 = queryNorm
                0.26348472 = fieldWeight in 3884, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.9746525 = idf(docFreq=2257, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3884)
          0.33333334 = coord(1/3)
        0.0016902501 = weight(_text_:s in 3884) [ClassicSimilarity], result of:
          0.0016902501 = score(doc=3884,freq=2.0), product of:
            0.023451481 = queryWeight, product of:
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.021569785 = queryNorm
            0.072074346 = fieldWeight in 3884, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.0872376 = idf(docFreq=40523, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.1 = coord(3/30)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities. A minimal sketch of the multiple-maps pooling step follows this entry.
    Source
    Machine learning. 87(2012) no.1, S.33-55
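    A minimal sketch of the pooling idea as we read it from the abstract above: every object gets a location and an importance weight in each map, and the modelled similarity between two objects sums a Student-t kernel over all maps. The function and the uniform weights below are illustrative assumptions, not the authors' implementation.

      # Illustrative sketch of multiple-maps similarity pooling (not the authors' code).
      import numpy as np

      def multiple_maps_similarities(Y, PI):
          # Y:  (M, N, 2) object locations, one 2-D map per index m
          # PI: (M, N) importance weights; for each object they should sum to 1 over the maps
          M, N, _ = Y.shape
          num = np.zeros((N, N))
          for m in range(M):
              d2 = np.sum((Y[m, :, None, :] - Y[m, None, :, :]) ** 2, axis=-1)
              num += np.outer(PI[m], PI[m]) / (1.0 + d2)   # Student-t kernel, weighted per map
          np.fill_diagonal(num, 0.0)                       # self-similarities are not modelled
          return num / num.sum()                           # normalise over all object pairs

      # Toy usage: 2 maps, 5 objects, uniform importance weights.
      rng = np.random.default_rng(2)
      Y = rng.normal(size=(2, 5, 2))
      PI = np.full((2, 5), 0.5)
      Q = multiple_maps_similarities(Y, PI)
      print(Q.sum())   # ~1.0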

Types

  • a 128
  • el 27
  • m 10
  • s 3
  • x 2
  • b 1
  • p 1
  • r 1
