Search (32 results, page 1 of 2)

  • theme_ss:"Visualisierung"
  • type_ss:"el"
  1. Jaklitsch, M.: Informationsvisualisierung am Beispiel des Begriffs Informationskompetenz : eine szientometrische Untersuchung unter Verwendung von BibExcel und VOSviewer (2016) 0.01
    0.0136871245 = product of:
      0.04106137 = sum of:
        0.009274333 = weight(_text_:in in 3067) [ClassicSimilarity], result of:
          0.009274333 = score(doc=3067,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1561842 = fieldWeight in 3067, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3067)
        0.031787038 = weight(_text_:und in 3067) [ClassicSimilarity], result of:
          0.031787038 = score(doc=3067,freq=10.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.328536 = fieldWeight in 3067, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3067)
      0.33333334 = coord(2/6)
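The score breakdowns in this listing are Lucene's ClassicSimilarity (TF-IDF) explain output. As a rough illustration of how the factors above combine, here is a minimal sketch that reproduces the first entry's score from the reported constants (the helper function is ours, not part of Lucene's API):

```python
import math

def classic_similarity_term_score(freq, idf, query_norm, field_norm):
    """One term's contribution, following Lucene ClassicSimilarity:
    score = queryWeight * fieldWeight, where
    queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # e.g. 1.3602545 * 0.043654136
    field_weight = tf * idf * field_norm
    return query_weight * field_weight

# Entry 1 (doc 3067): coord(2/6) means 2 of the 6 query clauses matched.
score_in  = classic_similarity_term_score(6.0,  1.3602545, 0.043654136, 0.046875)
score_und = classic_similarity_term_score(10.0, 2.216367,  0.043654136, 0.046875)
total = (score_in + score_und) * (2 / 6)
print(total)  # ~0.013687, matching the explanation's 0.0136871245
```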
    
    Abstract
    Objective - Given the rapidly growing number of publications on information literacy, there is an increasing need for overview works. This paper aims to provide an overview of the scholarly literature by means of science mapping. Research methods - Using BibExcel and VOSviewer, 1589 scholarly articles were analyzed and three different visualizations were created. Results - There is a relatively large international author network in which most of the main actors are connected with one another. The most important focal areas are: teaching information literacy in higher education, process models of information-seeking behavior, phenomenography, and information literacy in the workplace. Conclusions - Many of these focal areas have already been mentioned individually in review articles, but they had never been visualized together via science mapping. This work thus provides, for the first time, a "big picture" of the publication landscape. Future work could examine the literature with other science mapping tools or visualization techniques.
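The abstract only names the tools; as a hedged sketch of the kind of co-authorship network such an analysis produces, here is a minimal illustration with networkx (the record structure and author names are invented; BibExcel and VOSviewer derive this from real bibliographic exports):

```python
from itertools import combinations
import networkx as nx

# Hypothetical bibliographic records, e.g. parsed from a Web of Science export,
# the kind of input BibExcel prepares before visualization in VOSviewer.
records = [
    {"authors": ["Bruce, C.", "Lupton, M."]},
    {"authors": ["Kuhlthau, C."]},
    {"authors": ["Bruce, C.", "Edwards, S.", "Lupton, M."]},
]

G = nx.Graph()
for rec in records:
    # Every pair of co-authors on a paper gets an edge; repeated
    # collaborations accumulate in the edge weight.
    for a, b in combinations(rec["authors"], 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

print(G.edges(data=True))  # the weighted co-authorship network to be mapped
```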
    Content
    Cf.: https://yis.univie.ac.at/index.php/yis/article/view/1417/1251. This paper is based on the following thesis: Jaklitsch, Markus: Informationsvisualisierung am Beispiel des Begriffs Informationskompetenz: Eine szientometrische Untersuchung unter Verwendung von BibExcel und VOSviewer. Master's thesis (MSc), Karl-Franzens-Universität Graz, 2015. Full text: http://resolver.obvsg.at/urn:nbn:at:at-ubg:1-90404.
  2. Teutsch, K.: Die Welt ist doch eine Scheibe : Google-Herausforderer eyePlorer (2009) 0.01
    0.01161235 = product of:
      0.03483705 = sum of:
        0.008347853 = weight(_text_:in in 2678) [ClassicSimilarity], result of:
          0.008347853 = score(doc=2678,freq=28.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.14058185 = fieldWeight in 2678, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2678)
        0.026489196 = weight(_text_:und in 2678) [ClassicSimilarity], result of:
          0.026489196 = score(doc=2678,freq=40.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.27378 = fieldWeight in 2678, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2678)
      0.33333334 = coord(2/6)
    
    Content
    "An einem trüben Novembertag 2008 sitzen zwei Männer an einem ovalen Konferenztisch. Sie befinden sich wie die meisten Geschäftstreibenden im Strudel der Finanzmärkte. Ihr Tisch steht im einzigen mehrstöckigen Nachwendebau der Berliner Karl-Marx-Allee. Links vom Fenster leuchtet die Spitze des Fernsehturms, rechts fällt der Blick auf kilometerlange Kachelfassaden. Die Verhandlungen mit den Investoren ziehen sich seit Wochen hin. Ein rhetorisches Ringen. Der Hirnforscher fragt: "Ist Wissen mit großem 'W' und wissen mit kleinem 'w' für Sie das Gleiche?" Der Vertriebsmann sagt: "Learntainment", "Knowledge Nuggets", "Mindmapping". Am Ende liegt ein unterschriebener Vertrag auf dem Tisch - an einem Tag, an dem Daimler laut über Kurzarbeit nachdenkt. Martin Hirsch und Ralf von Grafenstein genehmigen sich einen Piccolo. In der schwersten Wirtschaftskrise der Bundesrepublik haben sie für "eyePlorer" einen potenten Investor gefunden. Er hat die Tragweite ihrer Idee verstanden, und er hat begriffen: Die Welt ist eine Scheibe.
    A new visual order - Martin Hirsch is the grandson of Nobel laureate Werner Heisenberg. He is also a brain researcher and has been occupied for years with the question: what is my head actually doing while I do brain research? Ralf von Grafenstein is a marketing expert specializing in Internet services. So on December 1, 2008, the two founded a company in Berlin whose Holy Grail is the aforementioned disc, on which - this is the idea - the whole world, or at least the Internet world, is soon to find a place. The disc is called eyePlorer, a name meant as an invitation to its users: on a novel, disc-shaped platform, they are to bring the immeasurable data sets of the Internet into a new visual order. The key to this, Hirsch and von Grafenstein were certain, lies in brain research - for why not transfer the associative abilities of humans to search engines? Providers like Google have so far kept their hands off such approaches. They rely instead on full-text programs, that is, language-capable systems which ultimately, just like keyword search, lead only to opaquely ranked collections of links. The sluggish user rarely ventures beyond page two of the search results. Because it is never seen, a great deal of potentially valuable information falls by the wayside.
    Skeleton with sunglasses - Hirsch sits in a glaringly lit conference room. In the right-hand corner stands a skeleton onto which someone has clamped a pair of sunglasses. In his hand, Hirsch holds a model brain on which he drums with his fingers in the rhythm of his speech. Although frighteningly convoluted network diagrams come into play over the course of the next hours, Hirsch sticks to the suggestive power of the image. He says: 'At Google, the machine's primary experience is that of a hunter. It stalks a web page.' One thinks: 'That is exactly what it feels like: enter a search term, press "enter", shoot the website!' - and the complementary metaphor already comes smoothly out of the quiver: in contrast to the Google hunter, says Hirsch, the eyePlorer is a gatherer that rummages around, organizes, and then nibbles at everything. Here, information that ordinary search engines merely point to is dressed up appetizingly and linked into focal topics. Unlike its predecessors, the machine is intelligent to a degree. In the course of a session it finds out what the user is after, understands the connection between search and content, and is therefore able to make recommendations.
    Einstein, Weizsäcker and Hitler - For demonstration purposes, the eyePlorer disc is projected onto the wall. If you type the name Werner Heisenberg into the small search field in the middle, the disc turns into a pie base. The individual slices correspond to categories such as "person", "technology" or "organization". These in turn are covered with colorful buttons, beneath which the information is hidden. Thus it happens that on the topic of Heisenberg one encounters not only the colleagues Einstein, Weizsäcker and Schrödinger, but also Adolf Hitler. A click on the corresponding button reveals, among other things: Heisenberg came under fire from the SS in 1933 because he would not let himself be harnessed to the cart of an antisemitic physics movement. On this principle, the freely associating machine keeps washing up new facts fully automatically - facts the user did not ask for, but which may support him in his research and which he may later (the machine still needs further development) also model himself. But is that what one wants, to be advised by a machine? "Google is like a zoo," Ralf von Grafenstein chimes in. "In one enclosure stands a giraffe, in another a predator, but they are clearly separated from one another by fences and paths. There is no way to look at them together. That is where we come in. We can compare apples with oranges!" The world is a disc, or rather the disc is a world, on which many things are connected with many others - and some things with nothing at all. The advantage of this machine is that in future it could create meaning where others merely point drily to sources. "Google is a terribly heterogeneous experience, with constant waiting times and mouse clicks in between. That costs me far too much meta-thinking power," says Hirsch. "We wanted to build a machine with an aesthetically pleasing environment that I hardly ever move away from, because it delivers information right into my thoughts."
    When the machine thinks - It fits the hubris of the project that the eyePlorer was originally to be called HAL, after the board computer gone out of control in Kubrick's "2001: A Space Odyssey". Shift each of those letters one position along the alphabet, however, and you get IBM. What happens to our knowledge when the machine itself begins to think? Ralf von Grafenstein puts on a serious face. "It is not our intention to leave it on its own. For us it is not only about finding, but also about taking part. The community is important. The dialogue runs both ways." The pilot, in the form of a watchful community, is thus to remain on board. This assumption is also favored by the emerging touch technologies with which the iPhone is currently so successful: "Ten percent of human brain capacity alone can be traced back to the pincer grip." Martin Hirsch is surprised that the IT industry is only now taking this insight into account. On touch-sensitive screens, users are soon to create content playfully with a few movements of the hand and make it available to the system. The search engine thus becomes a "sparring partner", and an information button becomes a "knowledge nugget". However the epistemic ingredients of the Internet superstore are served up: knowing, as a verb, is a lengthy process. At the moment, say its creators, the machine is at the level of a two-year-old. It is soon to be socialized on the Internet, and its upbringing will then be handled by the users. When he first saw Martin Hirsch with his disc, Ralf von Grafenstein thought: "This is overdue! This will come! This has to get out!" Now it is here - small, innocent and inconspicuous. You can find it on Google."
  3. Eckert, K.: The ICE-map visualization (2011) 0.01
    0.010439968 = product of:
      0.0313199 = sum of:
        0.012365777 = weight(_text_:in in 4743) [ClassicSimilarity], result of:
          0.012365777 = score(doc=4743,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 4743, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
        0.018954126 = weight(_text_:und in 4743) [ClassicSimilarity], result of:
          0.018954126 = score(doc=4743,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 4743, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4743)
      0.33333334 = coord(2/6)
    
    Abstract
    In this paper, we describe in detail the Information Content Evaluation Map (ICE-Map Visualization, formerly referred to as IC Difference Analysis). The ICE-Map Visualization is a visual data mining approach for all kinds of concept hierarchies that uses statistics about concept usage to help a user in the evaluation and maintenance of the hierarchy. It consists of a statistical framework that employs the notion of information content from information theory, as well as a visualization of the hierarchy and the result of the statistical analysis by means of a treemap.
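As a rough sketch of the information-content notion the framework builds on (the toy hierarchy and usage counts are invented for illustration; the ICE-Map's actual statistics are defined in the paper):

```python
import math

# Toy concept hierarchy with usage counts: how often each concept
# was used to index a document in the collection.
children = {"science": ["physics", "biology"], "physics": [], "biology": []}
usage = {"science": 10, "physics": 70, "biology": 20}

def subtree_count(concept):
    # A concept's frequency includes the usage of all its narrower concepts.
    return usage[concept] + sum(subtree_count(c) for c in children[concept])

total = subtree_count("science")
for concept in usage:
    p = subtree_count(concept) / total
    # Shannon information content: rare concepts carry more information;
    # the root, which subsumes everything, carries none (IC = 0).
    print(concept, "IC = %.3f bits" % -math.log2(p))
```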
    Theme
    Conception and application of the thesaurus principle
  4. Palm, F.: QVIZ : Query and context based visualization of time-spatial cultural dynamics (2007) 0.01
    0.009484224 = product of:
      0.028452672 = sum of:
        0.010709076 = weight(_text_:in in 1289) [ClassicSimilarity], result of:
          0.010709076 = score(doc=1289,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 1289, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1289)
        0.017743597 = product of:
          0.035487194 = sum of:
            0.035487194 = weight(_text_:22 in 1289) [ClassicSimilarity], result of:
              0.035487194 = score(doc=1289,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.23214069 = fieldWeight in 1289, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1289)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    QVIZ will research and create a framework for visualizing and querying archival resources by a time-space interface based on maps and emergent knowledge structures. The framework will also integrate social software, such as wikis, in order to utilize knowledge in existing and new communities of practice. QVIZ will lead to improved information sharing and knowledge creation, easier access to information in a user-adapted context and innovative ways of exploring and visualizing materials over time, between countries and other administrative units. The common European framework for sharing and accessing archival information provided by the QVIZ project will open a considerably larger commercial market based on archival materials as well as a richer understanding of European history.
    Content
    Lecture delivered at the workshop "Extending the multilingual capacity of The European Library in the EDL project", Stockholm, Swedish National Library, 22-23 November 2007.
  5. Kraker, P.; Kittel, C.; Enkhbayar, A.: Open Knowledge Maps : creating a visual interface to the world's scientific knowledge based on natural language processing (2016) 0.01
    0.0091104945 = product of:
      0.027331483 = sum of:
        0.013115887 = weight(_text_:in in 3205) [ClassicSimilarity], result of:
          0.013115887 = score(doc=3205,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 3205, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
        0.014215595 = weight(_text_:und in 3205) [ClassicSimilarity], result of:
          0.014215595 = score(doc=3205,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.14692576 = fieldWeight in 3205, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.046875 = fieldNorm(doc=3205)
      0.33333334 = coord(2/6)
    
    Abstract
    The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
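The project's exact chain of summarization and similarity techniques is its own; as a generic sketch of the "similarity over article metadata, then grouping into map areas" idea, here is a minimal illustration with scikit-learn (the sample titles are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

titles = [
    "Deep learning for image recognition",
    "Convolutional networks in computer vision",
    "Pollination ecology of alpine plants",
    "Plant-pollinator networks in mountain habitats",
]

# TF-IDF turns metadata into vectors; cosine similarity relates the articles.
vectors = TfidfVectorizer(stop_words="english").fit_transform(titles)
print(cosine_similarity(vectors).round(2))

# Clustering related articles yields one "bubble" per topic on the map.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```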
    Content
    Contribution to a special issue on 'Computerlinguistik und Bibliotheken' (computational linguistics and libraries). Cf.: http://0277.ch/ojs/index.php/cdrs_0277/article/view/157/355.
  6. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.01
    0.0068472484 = product of:
      0.020541744 = sum of:
        0.011669946 = weight(_text_:in in 3035) [ClassicSimilarity], result of:
          0.011669946 = score(doc=3035,freq=38.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19652747 = fieldWeight in 3035, product of:
              6.164414 = tf(freq=38.0), with freq of:
                38.0 = termFreq=38.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3035)
        0.008871798 = product of:
          0.017743597 = sum of:
            0.017743597 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.017743597 = score(doc=3035,freq=2.0), product of:
                0.15286934 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.043654136 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    A PICTURE is said to be worth a thousand words. That metaphor might be expected to pertain a fortiori in the case of scientific papers, where a figure can brilliantly illuminate an idea that might otherwise be baffling. Papers with figures in them should thus be easier to grasp than those without. They should therefore reach larger audiences and, in turn, be more influential simply by virtue of being more widely read. But are they?
    Content
    Bill Howe and his colleagues at the University of Washington, in Seattle, decided to find out. First, they trained a computer algorithm to distinguish between various sorts of figures-which they defined as diagrams, equations, photographs, plots (such as bar charts and scatter graphs) and tables. They exposed their algorithm to between 400 and 600 images of each of these types of figure until it could distinguish them with an accuracy greater than 90%. Then they set it loose on the more-than-650,000 papers (containing more than 10m figures) stored on PubMed Central, an online archive of biomedical-research articles. To measure each paper's influence, they calculated its article-level Eigenfactor score-a modified version of the PageRank algorithm Google uses to provide the most relevant results for internet searches. Eigenfactor scoring gives a better measure than simply noting the number of times a paper is cited elsewhere, because it weights citations by their influence. A citation in a paper that is itself highly cited is worth more than one in a paper that is not.
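That weighting scheme is, at its core, PageRank run on the citation graph; a minimal hedged sketch with networkx (the tiny graph is invented for illustration, not the study's data):

```python
import networkx as nx

# Directed citation graph: an edge A -> B means paper A cites paper B.
citations = nx.DiGraph([
    ("p1", "p2"), ("p3", "p2"), ("p4", "p2"),  # p2 is cited three times
    ("p2", "p5"),                              # p5 is cited once, but by p2
    ("p1", "p6"),
])

# PageRank weights each citation by the influence of the citing paper,
# so p5's single citation from the highly cited p2 counts for a lot.
scores = nx.pagerank(citations, alpha=0.85)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```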
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
    Dr Howe and his colleagues do, however, believe that the study of diagrams can result in new insights. A figure showing new metabolic pathways in a cell, for example, may summarise hundreds of experiments. Since illustrations can convey important scientific concepts in this way, they think that browsing through related figures from different papers may help researchers come up with new theories. As Dr Howe puts it, "the unit of scientific currency is closer to the figure than to the paper." With this thought in mind, the team have created a website (viziometrics.org, http://viziometrics.org/) where the millions of images sorted by their program can be searched using key words. Their next plan is to extract the information from particular types of scientific figure, to create comprehensive "super" figures: a giant network of all the known chemical processes in a cell, for example, or the best-available tree of life. At just one such super figure per paper, though, the citation records of articles containing such all-embracing diagrams may very well undermine the correlation that prompted their creation in the first place. Call it the ultimate marriage of chart and science.
  7. Wachsmann, L.: Entwurf und Implementierung eines Modells zur Visualisierung von OWL-Properties als Protégé-PlugIn mit Layoutalgorithmen aus Graphviz (2008) 0.00
    0.0044675306 = product of:
      0.026805183 = sum of:
        0.026805183 = weight(_text_:und in 4173) [ClassicSimilarity], result of:
          0.026805183 = score(doc=4173,freq=4.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.27704588 = fieldWeight in 4173, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=4173)
      0.16666667 = coord(1/6)
    
    Abstract
    This diploma thesis deals with the creation of a plug-in for the ontology editor Protégé. The plug-in visualizes object properties as links between two OWL classes. The starting point for the development is the OWLViz plug-in, which displays inheritance hierarchies of OWL classes as graphs. The placement of the graph's nodes and edges is handled by algorithms from the Graphviz program library.
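The thesis itself is a Protégé plug-in; as a hedged illustration of the underlying idea (object properties drawn as labeled edges between class nodes, with layout delegated to Graphviz), here is a sketch using the Python graphviz bindings (the class and property names are invented; requires the Graphviz binaries):

```python
from graphviz import Digraph

g = Digraph("owl_properties", graph_attr={"rankdir": "LR"})

# OWL classes become nodes ...
for cls in ("Person", "Publication", "Organization"):
    g.node(cls, shape="box")

# ... and object properties become labeled edges from domain to range.
g.edge("Person", "Publication", label="authorOf")
g.edge("Person", "Organization", label="memberOf")

g.render("owl_properties", format="png")  # node placement is computed by Graphviz's dot
```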
  8. Waechter, U.: Visualisierung von Netzwerkstrukturen (2002) 0.00
    0.003159021 = product of:
      0.018954126 = sum of:
        0.018954126 = weight(_text_:und in 1735) [ClassicSimilarity], result of:
          0.018954126 = score(doc=1735,freq=2.0), product of:
            0.09675359 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.043654136 = queryNorm
            0.19590102 = fieldWeight in 1735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0625 = fieldNorm(doc=1735)
      0.16666667 = coord(1/6)
    
    Abstract
    The WWW developed out of the need to search through textual information easily and quickly. This gave rise to the concept of the 'hyperlink', which makes it possible to link texts to one another. With the spread of the WWW, the number of web pages increased rapidly. The problem today is: practically every kind of information exists somewhere on the Internet, but how do you get at it?
  9. Choi, I.: Visualizations of cross-cultural bibliographic classification : comparative studies of the Korean Decimal Classification and the Dewey Decimal Classification (2017) 0.00
    0.0027826177 = product of:
      0.016695706 = sum of:
        0.016695706 = weight(_text_:in in 3869) [ClassicSimilarity], result of:
          0.016695706 = score(doc=3869,freq=28.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2811637 = fieldWeight in 3869, product of:
              5.2915025 = tf(freq=28.0), with freq of:
                28.0 = termFreq=28.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3869)
      0.16666667 = coord(1/6)
    
    Abstract
    The changes in KO systems induced by sociocultural influences may include those in both classificatory principles and cultural features. The proposed study will examine the Korean Decimal Classification (KDC)'s adaptation of the Dewey Decimal Classification (DDC) by comparing the two systems. This case manifests the sociocultural influences on KOSs in a cross-cultural context. Therefore, the study aims at an in-depth investigation of sociocultural influences by situating a KOS in a cross-cultural environment and examining the dynamics between two classification systems designed to organize information resources in two distinct sociocultural contexts. As a preceding stage of the comparison, a descriptive analysis was conducted on the changes that result from the meeting of different sociocultural features. The analysis aims to identify variations between the two schemes by comparing the knowledge structures of the two classifications, in terms of the quantity of class numbers that represent concepts and their relationships in each of the individual main classes. The most effective analytic strategy to show the patterns of the comparison was visualization of similarities and differences between the two systems. Increasing or decreasing tendencies in the classes through various editions were analyzed. Comparing the compositions of the main classes and the distributions of concepts in the KDC and DDC discloses the differences in their knowledge structures empirically. This phase of quantitative analysis and visualization techniques generates empirical evidence leading to interpretation.
    Content
    Contribution to: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, Champaign, IL, USA.
  10. Maaten, L. van den: Learning a parametric embedding by preserving local structure (2009) 0.00
    0.0027546515 = product of:
      0.016527908 = sum of:
        0.016527908 = weight(_text_:in in 3883) [ClassicSimilarity], result of:
          0.016527908 = score(doc=3883,freq=14.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.27833787 = fieldWeight in 3883, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3883)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper presents a new unsupervised dimensionality reduction technique, called parametric t-SNE, that learns a parametric mapping between the high-dimensional data space and the low-dimensional latent space. Parametric t-SNE learns the parametric mapping in such a way that the local structure of the data is preserved as well as possible in the latent space. We evaluate the performance of parametric t-SNE in experiments on three datasets, in which we compare it to the performance of two other unsupervised parametric dimensionality reduction techniques. The results of experiments illustrate the strong performance of parametric t-SNE, in particular, in learning settings in which the dimensionality of the latent space is relatively low.
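Parametric t-SNE keeps the standard t-SNE objective but produces the low-dimensional points through a learned mapping. As a simplified numpy sketch of that objective (fixed Gaussian bandwidth instead of the per-point, perplexity-calibrated bandwidths the method actually uses; the random data is a stand-in):

```python
import numpy as np

def tsne_kl(X, Y, sigma=1.0):
    """KL(P||Q) between high-dimensional affinities P (Gaussian kernel) and
    low-dimensional affinities Q (Student-t kernel) - the loss that parametric
    t-SNE minimizes with respect to the parameters of the mapping f: X -> Y."""
    dx = np.square(X[:, None] - X[None, :]).sum(-1)
    P = np.exp(-dx / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()

    dy = np.square(Y[:, None] - Y[None, :]).sum(-1)
    Q = 1.0 / (1.0 + dy)          # heavy tails preserve local structure
    np.fill_diagonal(Q, 0.0)
    Q /= Q.sum()

    m = P > 0
    return float(np.sum(P[m] * np.log(P[m] / Q[m])))

rng = np.random.RandomState(0)
X = rng.randn(100, 20)
Y = rng.randn(100, 2)   # would come from the learned parametric mapping
print(tsne_kl(X, Y))
```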
  11. Wu, Y.; Bai, R.: ¬An event relationship model for knowledge organization and visualization (2017) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 3867) [ClassicSimilarity], result of:
          0.013115887 = score(doc=3867,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 3867, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3867)
      0.16666667 = coord(1/6)
    
    Abstract
    An event is a specific occurrence involving participants, which is a typed, n-ary association of entities or other events, each identified as a participant in a specific semantic role in the event (Pyysalo et al. 2012; Linguistic Data Consortium 2005). Event types may vary across domains. Representing relationships between events can facilitate the understanding of knowledge in complex systems (such as economic systems, human body, social systems). In the simplest form, an event can be represented as Entity A <Relation> Entity B. This paper evaluates several knowledge organization and visualization models and tools, such as concept maps (Cmap), topic maps (Ontopia), network analysis models (Gephi), and ontology (Protégé), then proposes an event relationship model that aims to integrate the strengths of these models, and can represent complex knowledge expressed in events and their relationships.
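A minimal sketch of the binary form "Entity A <Relation> Entity B" as a data structure (the field names and examples are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass
class Event:
    subject: str   # Entity A
    relation: str  # typed semantic link between the participants
    obj: str       # Entity B (or another event, for n-ary nesting)

e1 = Event("inflation", "causes", "interest-rate rise")
e2 = Event("interest-rate rise", "slows", "housing market")
# Chaining events like this yields the relationship graph the model visualizes.
```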
    Content
    Contribution to: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, Champaign, IL, USA.
  12. Maaten, L. van den; Hinton, G.: Visualizing non-metric similarities in multiple maps (2012) 0.00
    0.0021859813 = product of:
      0.013115887 = sum of:
        0.013115887 = weight(_text_:in in 3884) [ClassicSimilarity], result of:
          0.013115887 = score(doc=3884,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.22087781 = fieldWeight in 3884, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=3884)
      0.16666667 = coord(1/6)
    
    Abstract
    Techniques for multidimensional scaling visualize objects as points in a low-dimensional metric map. As a result, the visualizations are subject to the fundamental limitations of metric spaces. These limitations prevent multidimensional scaling from faithfully representing non-metric similarity data such as word associations or event co-occurrences. In particular, multidimensional scaling cannot faithfully represent intransitive pairwise similarities in a visualization, and it cannot faithfully visualize "central" objects. In this paper, we present an extension of a recently proposed multidimensional scaling technique called t-SNE. The extension aims to address the problems of traditional multidimensional scaling techniques when these techniques are used to visualize non-metric similarities. The new technique, called multiple maps t-SNE, alleviates these problems by constructing a collection of maps that reveal complementary structure in the similarity data. We apply multiple maps t-SNE to a large data set of word association data and to a data set of NIPS co-authorships, demonstrating its ability to successfully visualize non-metric similarities.
  13. Collins, C.: WordNet explorer : applying visualization principles to lexical semantics (2006) 0.00
    0.0020609628 = product of:
      0.012365777 = sum of:
        0.012365777 = weight(_text_:in in 1288) [ClassicSimilarity], result of:
          0.012365777 = score(doc=1288,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.2082456 = fieldWeight in 1288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0625 = fieldNorm(doc=1288)
      0.16666667 = coord(1/6)
    
    Abstract
    Interface designs for lexical databases in NLP have suffered from not following design principles developed in the information visualization research community. We present a design paradigm and show it can be used to generate visualizations which maximize the usability and utility of WordNet. The techniques can be generally applied to other lexical databases used in NLP research.
  14. Barton, P.: ¬A missed opportunity : why the benefits of information visualisation seem still out of sight (2005) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 1293) [ClassicSimilarity], result of:
          0.011973113 = score(doc=1293,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 1293, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=1293)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper aims to identify what information visualisation is and how, in conjunction with the computer, it can be used as a tool to expand understanding. It also seeks to explain how information visualisation has been fundamental to the development of the computer, from its very early days to Apple's launch of the now ubiquitous W.I.M.P (Windows, Icons, Menus, Pointer) graphical user interface in 1984. It then asks why, after many years of progress and development through the late 1960s and 1970s, so little has changed in the way we interact with the data on our computers since the watershed of the Macintosh, and concludes by considering where the future of information visualisation may lie.
    Content
    Dissertation for MA degree in electronic media, School of Humanities, Oxford Brookes University
  15. Visual thesaurus (2005) 0.00
    0.0019732218 = product of:
      0.01183933 = sum of:
        0.01183933 = weight(_text_:in in 1292) [ClassicSimilarity], result of:
          0.01183933 = score(doc=1292,freq=22.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19937998 = fieldWeight in 1292, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=1292)
      0.16666667 = coord(1/6)
    
    Abstract
    A visual thesaurus system and method for displaying a selected term in association with its one or more meanings, other words to which it is related, and further relationship information. The results of a search are presented in a directed graph that provides more information than an ordered list. When a user selects one of the results, the display reorganizes around the user's search allowing for further searches, without the interruption of going to additional pages.
    Content
    Traditional print reference guides often have two methods of finding information: an order (alphabetical for dictionaries and encyclopedias, by subject hierarchy in the case of thesauri) and indices (ordered lists, with a more complete listing of words and concepts, which refer back to original content from the main body of the book). A user of such traditional print reference guides who is looking for information will either browse through the ordered information in the main body of the reference book, or scan through the indices to find what is necessary. The advent of the computer allows for much more rapid electronic searches of the same information, and for multiple layers of indices. Users can either search through information by entering a keyword, or browse through the information via an outline index, which represents the information contained in the main body of the data.
    There are two traditional user interfaces for such applications. First, the user may type text into a search field and, in response, a list of results is returned to the user. The user then selects a returned entry and may page through the resulting information. Alternatively, the user may choose from a list of words from an index. For example, software thesaurus applications, in which a user attempts to find synonyms, antonyms, homonyms, etc. for a selected word, are usually implemented using the conventional search and presentation techniques discussed above. The presentation of results only allows for a one-dimensional order of data at any one time. In addition, only a limited number of results can be shown at once, and selecting a result inevitably leads to another page - if the result is not satisfactory, the user must search again. Finally, it is difficult to present information about the manner in which the search results are related, or to present quantitative information about the results without causing confusion.
    Therefore, there exists a need for a multidimensional graphical display of information, in particular with respect to information relating to the meaning of words and their relationships to other words. There further exists a need to present large amounts of information in a way that can be manipulated by the user, without the user losing his place. And there exists a need for more fluid, intuitive and powerful thesaurus functionality that invites the exploration of language.
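The patent describes its own system; as a rough sketch of the kind of term-centered graph such an interface displays, here is an illustration built from WordNet with nltk and networkx (not the patented method; assumes nltk and its WordNet corpus are installed):

```python
import networkx as nx
from nltk.corpus import wordnet as wn  # requires nltk's "wordnet" corpus

def term_graph(word):
    """Directed graph around a term: its senses, their synonyms, and is-a links."""
    G = nx.DiGraph()
    for synset in wn.synsets(word):
        G.add_edge(word, synset.name(), rel="sense")
        for hyper in synset.hypernyms():
            G.add_edge(synset.name(), hyper.name(), rel="is-a")
        for lemma in synset.lemmas():
            if lemma.name() != word:
                G.add_edge(synset.name(), lemma.name(), rel="synonym")
    return G

G = term_graph("bank")
print(G.number_of_edges(), "relationships around 'bank'")
```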
  16. Beagle, D.: Visualizing keyword distribution across multidisciplinary c-space (2003) 0.00
    0.0018931155 = product of:
      0.011358692 = sum of:
        0.011358692 = weight(_text_:in in 1202) [ClassicSimilarity], result of:
          0.011358692 = score(doc=1202,freq=36.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1912858 = fieldWeight in 1202, product of:
              6.0 = tf(freq=36.0), with freq of:
                36.0 = termFreq=36.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1202)
      0.16666667 = coord(1/6)
    
    Abstract
    The concept of c-space is proposed as a visualization schema relating containers of content to cataloging surrogates and classification structures. Possible applications of keyword vector clusters within c-space could include improved retrieval rates through the use of captioning within visual hierarchies, tracings of semantic bleeding among subclasses, and access to buried knowledge within subject-neutral publication containers. The Scholastica Project is described as one example, following a tradition of research dating back to the 1980s. Preliminary focus group assessment indicates that this type of classification rendering may offer digital library searchers enriched entry strategies and an expanded range of re-entry vocabularies.
    Those of us who work in traditional libraries typically assume that our systems of classification - Library of Congress Classification (LCC) and Dewey Decimal Classification (DDC) - are descriptive rather than prescriptive. In other words, LCC classes and subclasses approximate natural groupings of texts that reflect an underlying order of knowledge, rather than arbitrary categories prescribed by librarians to facilitate efficient shelving. Philosophical support for this assumption has traditionally been found in a number of places, from the archetypal tree of knowledge, to Aristotelian categories, to the concept of discursive formations proposed by Michel Foucault. Gary P. Radford has elegantly described an encounter with Foucault's discursive formations in the traditional library setting: "Just by looking at the titles on the spines, you can see how the books cluster together...You can identify those books that seem to form the heart of the discursive formation and those books that reside on the margins. Moving along the shelves, you see those books that tend to bleed over into other classifications and that straddle multiple discursive formations. You can physically and sensually experience...those points that feel like state borders or national boundaries, those points where one subject ends and another begins, or those magical places where one subject has morphed into another..."
    But what happens to this awareness in a digital library? Can discursive formations be represented in cyberspace, perhaps through diagrams in a visualization interface? And would such a schema be helpful to a digital library user? To approach this question, it is worth taking a moment to reconsider what Radford is looking at. First, he looks at titles to see how the books cluster. To illustrate, I scanned one hundred books on the shelves of a college library under subclass HT 101-395, defined by the LCC subclass caption as Urban groups. The City. Urban sociology. Of the first 100 titles in this sequence, fifty included the word "urban" or variants (e.g. "urbanization"). Another thirty-five used the word "city" or variants. These keywords appear to mark their titles as the heart of this discursive formation. The scattering of titles not using "urban" or "city" used related terms such as "town," "community," or in one case "skyscrapers." So we immediately see some empirical correlation between keywords and classification. But we also see a problem with the commonly used search technique of title-keyword. A student interested in urban studies will want to know about this entire subclass, and may wish to browse every title available therein. A title-keyword search on "urban" will retrieve only half of the titles, while a search on "city" will retrieve just over a third. There will be no overlap, since no titles in this sample contain both words. The only place where both words appear in a common string is in the LCC subclass caption, but captions are not typically indexed in library Online Public Access Catalogs (OPACs). In a traditional library, this problem is mitigated when the student goes to the shelf looking for any one of the books and suddenly discovers a much wider selection than the keyword search had led him to expect. But in a digital library, the issue of non-retrieval can be more problematic, as studies have indicated. Micco and Popp reported that, in a study funded partly by the U.S. Department of Education, 65 of 73 unskilled users searching for material on U.S./Soviet foreign relations found some material but never realized they had missed a large percentage of what was in the database.
  17. Seeliger, F.: ¬A tool for systematic visualization of controlled descriptors and their relation to others as a rich context for a discovery system (2015) 0.00
    0.0018813931 = product of:
      0.011288359 = sum of:
        0.011288359 = weight(_text_:in in 2547) [ClassicSimilarity], result of:
          0.011288359 = score(doc=2547,freq=20.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.19010136 = fieldWeight in 2547, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.03125 = fieldNorm(doc=2547)
      0.16666667 = coord(1/6)
    
    Abstract
    The discovery service (a search engine and service called WILBERT) used at our library at the Technical University of Applied Sciences Wildau (TUAS Wildau) comprises more than 8 million items. If we were to record all licensed publications in this tool down to the article level, including their bibliographic records and full texts, we would have a holding estimated at a hundred million documents. A lot of features, such as ranking, autocompletion, multi-faceted classification and refining opportunities, reduce the number of hits. However, this is not enough to give intuitive support for a systematic overview of the topics related to documents in the library. John Naisbitt once said: "We are drowning in information, but starving for knowledge." This quote is still very true today. Two years ago, we started to develop micro thesauri for MINT (the German equivalent of STEM) subjects in order to develop an advanced indexing of the library stock. We use iQvoc as a vocabulary management system to create the thesaurus. It provides an easy-to-use browser interface that builds a SKOS thesaurus in the background. The purpose of this is to integrate the thesauri into WILBERT in order to offer a better subject-related search. This approach especially supports first-year students by giving them the possibility to browse through a hierarchical alignment of a subject, for instance logistics or computer science, and thereby discover how the terms are related. It also gives students an insight into established abbreviations and alternative labels. Students at the TUAS Wildau were involved in the development process of the software regarding the interface and functionality of iQvoc. The first steps have been taken and involve the inclusion of 3000 terms in our discovery tool WILBERT.
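As a hedged illustration of what such a SKOS micro-thesaurus entry boils down to, here is a sketch built with rdflib (the concept URIs and labels are invented, not taken from the Wildau vocabulary):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)

logistics = EX["logistics"]
scm = EX["supply-chain-management"]

g.add((logistics, RDF.type, SKOS.Concept))
g.add((logistics, SKOS.prefLabel, Literal("Logistics", lang="en")))
g.add((scm, RDF.type, SKOS.Concept))
g.add((scm, SKOS.prefLabel, Literal("Supply chain management", lang="en")))
g.add((scm, SKOS.broader, logistics))  # the hierarchy students can browse
g.add((scm, SKOS.altLabel, Literal("SCM", lang="en")))  # established abbreviation

print(g.serialize(format="turtle"))
```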
    Content
    Paper presented at: IFLA WLIC 2015 - Cape Town, South Africa in Session 141 - Science and Technology.
  18. Cao, N.; Sun, J.; Lin, Y.-R.; Gotz, D.; Liu, S.; Qu, H.: FacetAtlas : Multifaceted visualization for rich text corpora (2010) 0.00
    0.001821651 = product of:
      0.010929906 = sum of:
        0.010929906 = weight(_text_:in in 3366) [ClassicSimilarity], result of:
          0.010929906 = score(doc=3366,freq=12.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18406484 = fieldWeight in 3366, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3366)
      0.16666667 = coord(1/6)
    
    Abstract
    Documents in rich text corpora usually contain multiple facets of information. For example, an article about a specific disease often consists of different facets such as symptom, treatment, cause, diagnosis, prognosis, and prevention. Thus, documents may have different relations based on different facets. Powerful search tools have been developed to help users locate lists of individual documents that are most related to specific keywords. However, there is a lack of effective analysis tools that reveal the multifaceted relations of documents within or across document clusters. In this paper, we present FacetAtlas, a multifaceted visualization technique for visually analyzing rich text corpora. FacetAtlas combines search technology with advanced visual analytical tools to convey both global and local patterns simultaneously. We describe several unique aspects of FacetAtlas, including (1) node cliques and multifaceted edges, (2) an optimized density map, (3) automated opacity pattern enhancement for highlighting visual patterns, and (4) interactive context switching between facets. In addition, we demonstrate the power of FacetAtlas through a case study that targets patient education in the health care domain. Our evaluation shows the benefits of this work, especially in support of complex multifaceted data analysis.
    Theme
    Semantic environment in indexing and retrieval
  19. Maaten, L. van den: Accelerating t-SNE using Tree-Based Algorithms (2014) 0.00
    0.0018033426 = product of:
      0.010820055 = sum of:
        0.010820055 = weight(_text_:in in 3886) [ClassicSimilarity], result of:
          0.010820055 = score(doc=3886,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1822149 = fieldWeight in 3886, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3886)
      0.16666667 = coord(1/6)
    
    Abstract
    The paper investigates the acceleration of t-SNE - an embedding technique that is commonly used for the visualization of high-dimensional data in scatter plots - using two tree-based algorithms. In particular, the paper develops variants of the Barnes-Hut algorithm and of the dual-tree algorithm that approximate the gradient used for learning t-SNE embeddings in O(N log N). Our experiments show that the resulting algorithms substantially accelerate t-SNE, and that they make it possible to learn embeddings of data sets with millions of objects. Somewhat counterintuitively, the Barnes-Hut variant of t-SNE appears to outperform the dual-tree variant.
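scikit-learn ships a Barnes-Hut approximation of this kind as its default t-SNE method, so the speedup can be tried directly; a minimal usage sketch (random data stands in for a real dataset):

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.RandomState(0).randn(5000, 50)  # stand-in for high-dimensional data

# method="barnes_hut" approximates the gradient in O(N log N);
# method="exact" is the O(N^2) baseline that the tree-based variants accelerate.
embedding = TSNE(n_components=2, method="barnes_hut", angle=0.5,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (5000, 2) scatter-plot coordinates
```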
  20. Slavic, A.: Interface to classification : some objectives and options (2006) 0.00
    0.0017848461 = product of:
      0.010709076 = sum of:
        0.010709076 = weight(_text_:in in 2131) [ClassicSimilarity], result of:
          0.010709076 = score(doc=2131,freq=8.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.18034597 = fieldWeight in 2131, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=2131)
      0.16666667 = coord(1/6)
    
    Abstract
    This is a preprint to be published in the Extensions & Corrections to the UDC. The paper explains the basic functions of browsing and searching that need to be supported in relation to analytico-synthetic classifications such as Universal Decimal Classification (UDC), irrespective of any specific, real-life implementation. UDC is an example of a semi-faceted system that can be used, for instance, for both post-coordinate searching and hierarchical/facet browsing. The advantages of using a classification for IR, however, depend on the strength of the GUI, which should provide a user-friendly interface to classification browsing and searching. The power of this interface is in supporting visualisation that will 'convert' what is potentially a user-unfriendly indexing language based on symbols, to a subject presentation that is easy to understand, search and navigate. A summary of the basic functions of searching and browsing a classification that may be provided on a user-friendly interface is given and examples of classification browsing interfaces are provided.
    Content
    To be published in the Extensions & Corrections to the UDC. 28(2006).