Search (724 results, page 36 of 37)

  • type_ss:"el"
  1. Barrierefreies E-Government : Leitfaden für Entscheidungsträger, Grafiker und Programmierer (2005) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 4881) [ClassicSimilarity], result of:
              0.021787029 = score(doc=4881,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 4881, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4881)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
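    The explain trees repeated throughout this result list all follow Lucene's ClassicSimilarity. As a minimal sketch (assuming the standard ClassicSimilarity formulas; every constant below is copied from the explain output for doc 4881 above), the first score multiplies out like this:

```python
import math

# Constants copied from the explain output for doc 4881 above.
doc_freq, max_docs = 4597, 44218
query_norm = 0.04628742
field_norm = 0.03125                            # fieldNorm(doc=4881)

idf = 1 + math.log(max_docs / (doc_freq + 1))   # ~3.2635105 (ClassicSimilarity idf)
tf = math.sqrt(2.0)                             # tf(freq=2.0) ~1.4142135

query_weight = idf * query_norm                 # ~0.15105948
field_weight = tf * idf * field_norm            # ~0.14422815
term_score = query_weight * field_weight        # ~0.021787029 = weight(_text_:web)

# coord(1/2) and coord(1/5): only one of two, then one of five, query clauses matched.
final_score = term_score * 0.5 * 0.2            # ~0.002178703, the listed document score
print(final_score)
```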
    
    Footnote
    Review in: Information - Wissenschaft und Praxis 56(2005) H.8, S.459 (W. Schweibenz): "The guide is part of the handbook for secure e-government published by the Bundesamt für Sicherheit in der Informationstechnik, which is conceived as a reference work and central information resource with the character of recommendations. In six sections the publication presents all the important aspects of accessible e-government, which can also be applied to private web offerings. A particular concern of the guide is to explain to decision-makers why an accessible Internet is necessary. This is done in the first section, which starts from the poor usability of the Internet in general and then describes the needs of people with disabilities in particular. The problems of the following user groups are presented vividly, with examples and images: visually impaired and blind people; hearing-impaired and deaf people; people with cognitive impairments or poor concentration; people with epilepsy; and people with manual-motor impairments. This can help readers picture the problems of people with disabilities before they are confronted, in the second section, with ten pages of German laws and guidelines. Section 3, a guide to designing accessible web pages, gives programmers and designers concrete advice on which HTML and CSS techniques must be used, and how, in order to achieve accessibility. This ranges from questions of perceivability (text equivalents for audio and visual content, typography and colour), through general aspects of operability (orientation and navigation, frames, embedded user interfaces, forms) and general comprehensibility (language, abbreviations, acronyms), to compliance with standards (W3C-supported formats, adherence to markup standards, backward compatibility, device independence, compatibility with assistive technologies). Section 4 looks at communication on the Internet, focusing above all on e-mail and security - aspects of interest to all Internet users. Section 5 shows how web pages can be checked for accessibility. In addition to technical test methods (evaluation with different browsers and checking tools) and tests with disabled users, the question of quality seals for accessibility is addressed and existing test symbols are presented. A sixth section with links and literature rounds off the guide and points interested readers to further sources."
  2. Cohen, D.J.: From Babel to knowledge : data mining large digital collections (2006) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1178) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1178,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1178, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1178)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In Jorge Luis Borges's curious short story The Library of Babel, the narrator describes an endless collection of books stored from floor to ceiling in a labyrinth of countless hexagonal rooms. The pages of the library's books seem to contain random sequences of letters and spaces; occasionally a few intelligible words emerge in the sea of paper and ink. Nevertheless, readers diligently, and exasperatingly, scan the shelves for coherent passages. The narrator himself has wandered numerous rooms in search of enlightenment, but with resignation he simply awaits his death and burial - which Borges explains (with signature dark humor) consists of being tossed unceremoniously over the library's banister. Borges's nightmare, of course, is a cursed vision of the research methods of disciplines such as literature, history, and philosophy, where the careful reading of books, one after the other, is supposed to lead inexorably to knowledge and understanding. Computer scientists would approach Borges's library far differently. Employing the information theory that forms the basis for search engines and other computerized techniques for assessing in one fell swoop large masses of documents, they would quickly realize the collection's incoherence through sampling and statistical methods - and wisely start looking for the library's exit. These computational methods, which allow us to find patterns, determine relationships, categorize documents, and extract information from massive corpuses, will form the basis for new tools for research in the humanities and other disciplines in the coming decade. For the past three years I have been experimenting with how to provide such end-user tools - that is, tools that harness the power of vast electronic collections while hiding much of their complicated technical plumbing. In particular, I have made extensive use of the application programming interfaces (APIs) the leading search engines provide for programmers to query their databases directly (from server to server without using their web interfaces). In addition, I have explored how one might extract information from large digital collections, from the well-curated lexicographic database WordNet to the democratic (and poorly curated) online reference work Wikipedia. While processing these digital corpuses is currently an imperfect science, even now useful tools can be created by combining various collections and methods for searching and analyzing them. And more importantly, these nascent services suggest a future in which information can be gleaned from, and sense can be made out of, even imperfect digital libraries of enormous scale. A brief examination of two approaches to data mining large digital collections hints at this future, while also providing some lessons about how to get there.
  3. Baker, T.; Dekkers, M.: Identifying metadata elements with URIs : The CORES resolution (2003) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1199) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1199,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1199, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1199)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    On 18 November 2002, at a meeting organised by the CORES Project (Information Society Technologies Programme, European Union), several organisations regarded as maintenance authorities for metadata elements achieved consensus on a resolution to assign Uniform Resource Identifiers (URIs) to metadata elements as a useful first step towards the development of mapping infrastructures and interoperability services. The signatories of the CORES Resolution agreed to promote this consensus in their communities and beyond and to implement an action plan in the following six months. Six months having passed, the maintainers of GILS, ONIX, MARC 21, CERIF, DOI, IEEE/LOM, and Dublin Core report on their implementations of the resolution and highlight issues of relevance to establishing good-practice conventions for declaring, identifying, and maintaining metadata elements more generally. In June 2003, the resolution was also endorsed by the maintainers of UNIMARC. The "Resolution on Metadata Element Identifiers", or CORES Resolution, is an agreement among the maintenance organisations for several major metadata standards - GILS, ONIX, MARC 21, UNIMARC, CERIF, DOI®, IEEE/LOM, and Dublin Core - to identify their metadata elements using Uniform Resource Identifiers (URIs). The Uniform Resource Identifier, defined in the IETF RFC 2396 as "a compact string of characters for identifying an abstract or physical resource", has been promoted for use as a universal form of identification by the World Wide Web Consortium. The CORES Resolution, formulated at a meeting organised by the European project CORES in November 2002, included a commitment to publicise the consensus statement to a wider audience of metadata standards initiatives and to implement key points of the agreement within the following six months - specifically, to define URI assignment mechanisms, assign URIs to elements, and formulate policies for the persistence of those URIs. This article marks the passage of six months by reporting on progress made in implementing this common action plan. After presenting the text of the CORES Resolution and its three "clarifications", the article summarises the position of each signatory organisation towards assigning URIs to its metadata elements, noting any practical or strategic problems that may have emerged. These progress reports were based on input from Thomas Baker, José Borbinha, Eliot Christian, Erik Duval, Keith Jeffery, Rebecca Guenther, and Norman Paskin. The article closes with a few general observations about these first steps towards the clarification of shared conventions for the identification of metadata elements and perhaps, one can hope, towards the ultimate goal of improving interoperability among a diversity of metadata communities.
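    As a small, hedged illustration of the practice the resolution promotes - identifying each metadata element by a URI - the sketch below uses the published Dublin Core element URIs; the dictionary representation itself is illustrative and not part of the CORES text:

```python
# Each metadata element is identified by a URI, so registries and crosswalks can
# refer to it unambiguously. The dict is just one illustrative representation.
DC_ELEMENT_URIS = {
    "title":   "http://purl.org/dc/elements/1.1/title",
    "creator": "http://purl.org/dc/elements/1.1/creator",
    "subject": "http://purl.org/dc/elements/1.1/subject",
}

def element_uri(name: str) -> str:
    """Return the URI identifying a Dublin Core element."""
    return DC_ELEMENT_URIS[name]

print(element_uri("creator"))  # -> http://purl.org/dc/elements/1.1/creator
```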
  4. Hill, L.: New Protocols for Gazetteer and Thesaurus Services (2002) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1206) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1206,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1206, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1206)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The Alexandria Digital Library Project announces the online publication of two protocols to support querying and response interactions using distributed services: one for gazetteers and one for thesauri. These protocols have been developed for our own purposes and also to support the general interoperability of gazetteers and thesauri on the web. See <http://www.alexandria.ucsb.edu/~gjanee/gazetteer/> and <http://www.alexandria.ucsb.edu/~gjanee/thesaurus/>. For the gazetteer protocol, we have provided a page of test forms that can be used to experiment with the operational functions of the protocol in accessing two gazetteers: the ADL Gazetteer and the ESRI Gazetteer (ESRI has participated in the development of the gazetteer protocol). We are in the process of developing a thesaurus server and a simple client to demonstrate the use of the thesaurus protocol. We are soliciting comments on both protocols. Please remember that we are seeking protocols that are essentially "simple" and easy to implement and that support basic operations - they should not duplicate all of the functions of specialized gazetteer and thesaurus interfaces. We continue to discuss ways of handling various issues and to further develop the protocols. For the thesaurus protocol, outstanding issues include the treatment of multilingual thesauri and the degree to which the language attribute should be supported; whether the Scope Note element should be changed to a repeatable Note element; the best way to handle the hierarchical report for multi-hierarchies where portions of the hierarchy are repeated; and whether support for searching by term identifiers is redundant and unnecessary given that the terms themselves are unique within a thesaurus. For the gazetteer protocol, we continue to work on validation of query and report XML documents and on implementing the part of the protocol designed to support the submission of new entries to a gazetteer. We would like to encourage open discussion of these protocols through the NKOS discussion list (see the NKOS webpage at <http://nkos.slis.kent.edu/>) and the CGGR-L discussion list that focuses on gazetteer development (see ADL Gazetteer Development page at <http://www.alexandria.ucsb.edu/gazetteer>).
  5. Duval, E.; Hodgins, W.; Sutton, S.; Weibel, S.L.: Metadata principles and practicalities (2002) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1208) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1208,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1208, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1208)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    For those of us still struggling with basic concepts regarding metadata in this brave new world in which cataloging means much more than MARC, an article like this is welcome indeed. In this 30,000-foot overview of the metadata landscape, broad issues such as modularity, namespaces, extensibility, refinement, and multilingualism are discussed. In addition, "practicalities" like application profiles, syntax and semantics, metadata registries, and automated generation of metadata are explained. Although this piece is not exhaustive of high-level metadata issues, it is nonetheless a useful description of some of the most important issues surrounding metadata creation and use. The rapid changes in the means of information access occasioned by the emergence of the World Wide Web have spawned an upheaval in the means of describing and managing information resources. Metadata is a primary tool in this work, and an important link in the value chain of knowledge economies. Yet there is much confusion about how metadata should be integrated into information systems. How is it to be created or extended? Who will manage it? How can it be used and exchanged? Whence comes its authority? Can different metadata standards be used together in a given environment? These and related questions motivate this paper. The authors hope to make explicit the strong foundations of agreement shared by two prominent metadata initiatives: the Dublin Core Metadata Initiative (DCMI) and the Institute of Electrical and Electronics Engineers (IEEE) Learning Object Metadata (LOM) Working Group. This agreement emerged from a joint metadata taskforce meeting in Ottawa in August 2001. By elucidating shared principles and practicalities of metadata, we hope to raise the level of understanding among our respective (and shared) constituents, so that all stakeholders can move forward more decisively to address their respective problems. The ideas in this paper are divided into two categories. Principles are those concepts judged to be common to all domains of metadata and which might inform the design of any metadata schema or application. Practicalities are the rules of thumb, constraints, and infrastructure issues that emerge from bringing theory into practice in the form of useful and sustainable systems.
  6. Paskin, N.: DOI: current status and outlook (1999) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1245) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1245,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1245, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1245)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Over the past few months the International DOI Foundation (IDF) has produced a number of discussion papers and other materials about the Digital Object Identifier (DOI) initiative. They are all available at the DOI web site, including a brief summary of the DOI origins and purpose. The aim of the present paper is to update those papers, reflecting recent progress, and to provide a summary of the current position and context of the DOI. Although much of the material presented here is the result of a consensus by the organisations forming the International DOI Foundation, some of the points discuss work in progress. The paper describes the origin of the DOI as a persistent identifier for managing copyrighted materials and its development under the non-profit International DOI Foundation into a system providing identifiers of intellectual property with a framework for open applications to be built using them. Persistent identification implementations consistent with URN specifications have up to now been hindered by lack of widespread availability of resolution mechanisms, content typology consensus, and sufficiently flexible infrastructure; DOI attempts to overcome these obstacles. Resolution of the DOI uses the Handle System®, which offers the necessary functionality for open applications. The aim of the International DOI Foundation is to promote widespread applications of the DOI, which it is doing by pioneering some early implementations and by providing an extensible framework to ensure interoperability of future DOI uses. Applications of the DOI will require an interoperable scheme of declared metadata with each DOI; the basis of the DOI metadata scheme is a minimal "kernel" of elements supplemented by additional application-specific elements, under an umbrella data model (derived from the INDECS analysis) that promotes convergence of different application metadata sets. The IDF intends to require declaration of only a minimal set of metadata, sufficient to enable unambiguous look-up of a DOI, but this must be capable of extension by others to create open applications.
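    As a brief, hedged aside on what resolution of a persistent identifier looks like in practice (the web-proxy pattern below is the later, common convention and is not specified in this 1999 paper):

```python
# A DOI names an object persistently; clients obtain its current location by
# asking a resolver rather than storing a URL. The doi.org proxy shown here is
# an assumption for illustration; resolution itself runs on the Handle System.
def doi_resolution_url(doi: str) -> str:
    """Build the conventional web-proxy URL for resolving a DOI."""
    return f"https://doi.org/{doi}"

print(doi_resolution_url("10.1000/182"))  # 10.1000/182 identifies the DOI Handbook
```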
  7. Leresche, F.: Libraries and archives : sharing standards to facilitate access to cultural heritage (2008) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 1425) [ClassicSimilarity], result of:
              0.021787029 = score(doc=1425,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 1425, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1425)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    This presentation shares the French experience of collaboration between archivists and librarians, led by working groups within the Association française de normalisation (AFNOR). With the arrival of the Web, the various heritage institutions are increasingly aware of their areas of commonality and the need for interoperability between their catalogues. This is particularly true for archives and libraries, which have developed standards for meeting their specific needs regarding document description, but which are now seeking to establish a dialogue for defining a coherent set of standards to which professionals in both communities can refer. After discussing the characteristics of the collections held respectively in archives and libraries, this presentation will draw a portrait of the standards established by the two professional communities in the following areas: description of documents; access points in descriptions and authority records; description of functions; and identification of conservation institutions and collections. It is concluded from this study that the standards developed by libraries on the one hand and by archives on the other are most often complementary and that each professional community is being driven to use the standards developed by the other, or would at least profit from doing so. A dialogue between the two professions is seen today as a necessity for fostering the compatibility and interoperability of standards and documentary tools. Despite this recognition of the need for collaboration, the development of standards is still largely a compartmentalized process, and the fact that normative work is conducted within professional associations is a contributing factor. The French experience shows, however, that it is possible to create working groups where archivists and librarians unite and develop a comprehensive view of the standards and initiatives conducted by each, with the goal of articulating them as best they can for the purpose of interoperability, yet respecting the specific requirements of each.
  8. Neubauer, W.: ¬The Knowledge portal or the vision of easy access to information (2009) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 2812) [ClassicSimilarity], result of:
              0.021787029 = score(doc=2812,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 2812, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2812)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    From a quantitative and qualitative point of view, the ETH Library offers its users an extensive choice of information services. In this respect all researchers, scientists and students have access to nearly all relevant information. This is one side of the coin. On the other hand, this broad but also heterogeneous bundle of information sources has disadvantages that should not be underestimated: the more information services and information channels you have, the more complex it is to find what you need for your scientific work. A portal-like integration of all the different information resources is still missing. The vision and main goal of the project "Knowledge Portal" is to develop a central access system, a "single-point-of-access" for all electronic information services. This means that all these sources - from the library's catalogue and the full-text in-house applications to external, licensed sources - should be accessible via one central web service. Although the primary target group for this vision is the science community of ETH Zurich, the interested public should also be taken into account, for the library also has a nation-wide responsibility. The general idea of launching a complex project like this comes from a survey the library conducted one and a half years ago. We asked a defined sample of scientists what they expected from their library, and one constant answer was that they wanted one point of access to all the electronic library services and that, beyond this, the search processes should be as simple as possible. We took this demand as a mandate to develop a "single-point-of-access" to all electronic services the library provides. The presentation gives an overview of the general idea of the project and describes the current status.
  9. Brown, T.B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; Amodei, D.: Language models are few-shot learners (2020) 0.00
    0.002178703 = product of:
      0.010893514 = sum of:
        0.010893514 = product of:
          0.021787029 = sum of:
            0.021787029 = weight(_text_:web in 872) [ClassicSimilarity], result of:
              0.021787029 = score(doc=872,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.14422815 = fieldWeight in 872, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.03125 = fieldNorm(doc=872)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.
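    As a hedged sketch of the few-shot setting described above - the task is specified purely as text, demonstrations followed by a new query, with no gradient updates - the prompt wording and word list below are illustrative and not taken from the paper:

```python
# Few-shot prompting: demonstrations plus a new query are handed to the model as
# plain text; no weights are updated. The word-unscrambling examples are made up.
demonstrations = [("elppa", "apple"), ("rehtaew", "weather"), ("dnuorg", "ground")]
query = "ecneics"  # expected continuation: "science"

prompt = "Unscramble the letters into an English word.\n"
for scrambled, answer in demonstrations:
    prompt += f"{scrambled} -> {answer}\n"
prompt += f"{query} ->"

print(prompt)  # this text alone, not fine-tuning, specifies the task to the model
```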
  10. Oberhauser, O.: Card-Image Public Access Catalogues (CIPACs) : a critical consideration of a cost-effective alternative to full retrospective catalogue conversion (2002) 0.00
    0.0019063652 = product of:
      0.009531826 = sum of:
        0.009531826 = product of:
          0.019063652 = sum of:
            0.019063652 = weight(_text_:web in 1703) [ClassicSimilarity], result of:
              0.019063652 = score(doc=1703,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.12619963 = fieldWeight in 1703, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1703)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Footnote
    Review in: ABI-Technik 21(2002) H.3, S.292 (E. Pietzsch): "With his thesis, Otto C. Oberhauser has presented an impressive analysis of digitized card catalogues (CIPACs). The work offers a wealth of data and statistics not previously available, and librarians considering the digitization of their catalogues will find in it a unique basis for decision-making. After an introductory chapter, Oberhauser first gives an overview of a selection of CIPACs available worldwide and their indexing methods (binary search, partial indexing, search in OCR data), and offers comparative observations on geographical distribution, size, software, navigation and other properties. He then describes and analyses implementation issues, beginning with the reasons that can lead to digitization: costs, implementation time, improved access, savings in shelf space. He continues with technical aspects such as scanning and quality control, image standards, OCR, manual post-processing and server technology. He also addresses the rather obstructive properties of older catalogues, as well as presentation on the web and integration with existing OPACs. To one important aspect, namely the judgement of the most important target group, the library users, Oberhauser devoted a field study of his own, whose results he analyses in detail in the final chapter. Appendices on the method of data collection and individual descriptions of many catalogues round off the work. Overall, I can only describe it as the most impressive collection of data, statistics and analyses on the subject of CIPACs that I have yet encountered. I want to single out one nicely elaborated point, namely the extensive fragmentation of the software systems in use: at present we can roughly distinguish between turnkey solutions (a contracted company carries out, as general contractor, all tasks from digitization to delivery of the finished application) and split solutions (digitization is commissioned separately from indexing and software development, or carried out in-house). The latter require in-house project management. Yet software development in-house can produce solutions that are in no way inferior to commercial offerings. It is only a pity that the many individual in-house developments have not yet led to initiatives aiming, in the manner of public domain software, at an "optimal", inexpensive and widely accepted software solution. A few critical remarks should nevertheless not go unmentioned. For example, there is no differentiation between "guide card" systems, i.e. those indexing only every 20th or 50th card, and systems with complete indexing of all card headings, although this far-reaching design decision shifts costs considerably between catalogue creation and later use. In the statistical evaluation of the field study I would also have liked a finer differentiation by CIPAC type or by library. For instance, more than half of the surveyed users stated that the operation of the CIPAC was initially hard to understand or that using it was time-consuming; it remains open, however, whether there are differences between the various implementation types."
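    The binary-search access method mentioned in the review can be sketched as follows (a minimal, assumed model: the card headings are invented, and the user's visual comparison of a displayed card image is replaced by a string comparison):

```python
# The scanned cards are in alphabetical order; the system repeatedly shows the
# middle card of the remaining range until the user has narrowed in on the
# desired heading. Here a string comparison stands in for the user's decision.
cards = ["Abel", "Bach", "Goethe", "Kant", "Mann", "Schiller", "Zweig"]

def locate(heading: str) -> int:
    lo, hi = 0, len(cards) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if cards[mid] < heading:   # user clicks "later in the alphabet"
            lo = mid + 1
        else:                      # user clicks "this card or earlier"
            hi = mid
    return lo

print(cards[locate("Kant")])  # -> Kant, reached after ~log2(n) card images
```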
  11. Mayr, P.; Petras, V.; Walter, A.-K.: Results from a German terminology mapping effort : intra- and interdisciplinary cross-concordances between controlled vocabularies (2007) 0.00
    0.0019063652 = product of:
      0.009531826 = sum of:
        0.009531826 = product of:
          0.019063652 = sum of:
            0.019063652 = weight(_text_:web in 542) [ClassicSimilarity], result of:
              0.019063652 = score(doc=542,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.12619963 = fieldWeight in 542, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=542)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    In 2004, the German Federal Ministry for Education and Research funded a major terminology mapping initiative at the GESIS Social Science Information Centre in Bonn (GESIS-IZ), which will find its conclusion this year. The task of this terminology mapping initiative was to organize, create and manage 'cross-concordances' between major controlled vocabularies (thesauri, classification systems, subject heading lists) centred around the social sciences but quickly extending to other subject areas. Cross-concordances are intellectually (manually) created crosswalks that determine equivalence, hierarchy, and association relations between terms from two controlled vocabularies. Most vocabularies have been related bilaterally, that is, there is a cross-concordance relating terms from vocabulary A to vocabulary B as well as a cross-concordance relating terms from vocabulary B to vocabulary A (bilateral relations are not necessarily symmetrical). By August 2007, 24 controlled vocabularies from 11 disciplines will be connected, with vocabulary sizes ranging from 2,000 to 17,000 terms per vocabulary. To date, more than 260,000 relations have been generated. A database including all vocabularies and cross-concordances was built, and a 'heterogeneity service' was developed - a web service that makes the cross-concordances available to other applications. Many cross-concordances are already implemented and utilized for the German Social Science Information Portal Sowiport (www.sowiport.de), which searches bibliographical and other information resources (incl. 13 databases with 10 different vocabularies and ca. 2.5 million references).
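    A hedged sketch of how a single directed cross-concordance relation might be represented as a record (the field names and sample terms are illustrative, not the GESIS database schema):

```python
from dataclasses import dataclass

@dataclass
class CrossConcordanceRelation:
    source_vocabulary: str  # e.g. a thesaurus
    source_term: str
    target_vocabulary: str  # e.g. a classification or subject heading list
    target_term: str
    relation: str           # "equivalence", "hierarchy" or "association"

# Relations are directed: A->B and B->A are separate entries and need not be
# symmetrical, as the abstract notes. Sample terms are invented.
rel = CrossConcordanceRelation("Vocabulary A", "Arbeitsmarkt",
                               "Vocabulary B", "Labour market", "equivalence")
print(rel)
```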
  12. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.00
    0.0019063652 = product of:
      0.009531826 = sum of:
        0.009531826 = product of:
          0.019063652 = sum of:
            0.019063652 = weight(_text_:web in 3460) [ClassicSimilarity], result of:
              0.019063652 = score(doc=3460,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.12619963 = fieldWeight in 3460, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3460)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    See also Open Password, no. 167, 01.03.2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters and two great-grandchildren. Garfield took his first degree, a bachelor's in chemistry, at Columbia University in New York City in 1949. In 1954 he added a degree in library science, and in 1961 he went on to earn a doctorate in structural linguistics. By his own account he was neither particularly good nor particularly happy as a chemistry student. He had his moment of revelation at a meeting of the American Chemical Society, when he discovered that searching for literature might be a way to earn a living: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield initially worked as a consultant for pharmaceutical companies, where he specialised in subject information by working through the contents of the relevant journals. In 1955 he put forward his groundbreaking idea in "Science": to record the citations of scientific publications systematically and to make the connections between citations visible. In 1960 Garfield founded the Institute for Scientific Information, whose CEO he remained until 1992. In 1964 he launched the Science Citation Index; further products such as the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978) and the Journal Citation Reports followed. These indexes were brought together in the "Web of Science" and made available electronically as a database. This enabled researchers to find the literature relevant to them "at their fingertips" and to orient themselves within it. Beyond that, rankings based on Garfield's metrics made it possible to measure the relative scientific importance of papers, authors, research institutions, regions and countries."
  13. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.001881392 = product of:
      0.00940696 = sum of:
        0.00940696 = product of:
          0.01881392 = sum of:
            0.01881392 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.01881392 = score(doc=1184,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    26.12.2011 14:08:22
  14. Graphic details : a scientific study of the importance of diagrams to science (2016) 0.00
    0.001881392 = product of:
      0.00940696 = sum of:
        0.00940696 = product of:
          0.01881392 = sum of:
            0.01881392 = weight(_text_:22 in 3035) [ClassicSimilarity], result of:
              0.01881392 = score(doc=3035,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.116070345 = fieldWeight in 3035, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3035)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    As the team describe in a paper posted (http://arxiv.org/abs/1605.04951) on arXiv, they found that figures did indeed matter - but not all in the same way. An average paper in PubMed Central has about one diagram for every three pages and gets 1.67 citations. Papers with more diagrams per page and, to a lesser extent, plots per page tended to be more influential (on average, a paper accrued two more citations for every extra diagram per page, and one more for every extra plot per page). By contrast, including photographs and equations seemed to decrease the chances of a paper being cited by others. That agrees with a study from 2012, whose authors counted (by hand) the number of mathematical expressions in over 600 biology papers and found that each additional equation per page reduced the number of citations a paper received by 22%. This does not mean that researchers should rush to include more diagrams in their next paper. Dr Howe has not shown what is behind the effect, which may merely be one of correlation, rather than causation. It could, for example, be that papers with lots of diagrams tend to be those that illustrate new concepts, and thus start a whole new field of inquiry. Such papers will certainly be cited a lot. On the other hand, the presence of equations really might reduce citations. Biologists (as are most of those who write and read the papers in PubMed Central) are notoriously maths-averse. If that is the case, looking in a physics archive would probably produce a different result.
  15. DeSilva, J.M.; Traniello, J.F.A.; Claxton, A.G.; Fannin, L.D.: When and why did human brains decrease in size? : a new change-point analysis and insights from brain evolution in ants (2021) 0.00
    0.001881392 = product of:
      0.00940696 = sum of:
        0.00940696 = product of:
          0.01881392 = sum of:
            0.01881392 = weight(_text_:22 in 405) [ClassicSimilarity], result of:
              0.01881392 = score(doc=405,freq=2.0), product of:
                0.16209066 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04628742 = queryNorm
                0.116070345 = fieldWeight in 405, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=405)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Source
    Frontiers in ecology and evolution, 22 October 2021 [https://www.frontiersin.org/articles/10.3389/fevo.2021.742639/full]
  16. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.00
    0.0016340271 = product of:
      0.008170135 = sum of:
        0.008170135 = product of:
          0.01634027 = sum of:
            0.01634027 = weight(_text_:web in 754) [ClassicSimilarity], result of:
              0.01634027 = score(doc=754,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.108171105 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    The aim of our investigation was to discuss exactly what is formulated in the title. This will of course constitute a main part of this write-up. However, in the process of investigations it also became clear that the focus has to be extended, not to just cover Google and search engines in an isolated fashion, but to also cover other Web 2.0 related phenomena, particularly Wikipedia, Blogs, and other related community efforts. It was the purpose of our investigation to demonstrate: - Plagiarism and IPR violation are serious concerns in academia and in the commercial world - Current techniques to fight both are rudimentary, yet could be improved by a concentrated initiative - One reason why the fight is difficult is the dominance of Google as THE major search engine and that Google is unwilling to cooperate - The monopolistic behaviour of Google is also threatening how we see the world, how we as individuals are seen (complete loss of privacy) and is even threatening the world economy (!) In our proposal we did present a list of typical sections that would be covered at varying depth, with the possible replacement of one or the other by items that would emerge as still more important.
  17. Lavoie, B.; Henry, G.; Dempsey, L.: ¬A service framework for libraries (2006) 0.00
    0.0016340271 = product of:
      0.008170135 = sum of:
        0.008170135 = product of:
          0.01634027 = sum of:
            0.01634027 = weight(_text_:web in 1175) [ClassicSimilarity], result of:
              0.01634027 = score(doc=1175,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.108171105 = fieldWeight in 1175, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1175)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    Much progress has been made in aligning library services with changing (and increasingly digital and networked) research and learning environments. At times, however, this progress has been uneven, fragmented, and reactive. As libraries continue to engage with an ever-shifting information landscape, it is apparent that their efforts would be facilitated by a shared view of how library services should be organized and surfaced in these new settings and contexts. Recent discussions in a variety of areas underscore this point: * Institutional repositories: what is the role of the library in collecting, managing, and preserving institutional scholarly output, and what services should be offered to faculty and students in this regard? * Metasearch: how can the fragmented pieces of library collections be brought together to simplify and improve the search experience of the user? * E-learning and course management systems: how can library services be lifted out of traditional library environments and inserted into the emerging workflows of "e-scholars" and "e-learners"? * Exposing library collections to search engines: how can libraries surface their collections in the general Web search environment, and how can users be provisioned with better tools to navigate an increasingly complex information landscape? In each case, there is as yet no shared picture of the library to bring to bear on these questions; there is little consensus on the specific library services that should be expected in these environments, how they should be organized, and how they should be presented.
  18. Dushay, N.: Visualizing bibliographic metadata : a virtual (book) spine viewer (2004) 0.00
    0.0016340271 = product of:
      0.008170135 = sum of:
        0.008170135 = product of:
          0.01634027 = sum of:
            0.01634027 = weight(_text_:web in 1197) [ClassicSimilarity], result of:
              0.01634027 = score(doc=1197,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.108171105 = fieldWeight in 1197, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1197)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    User interfaces for digital information discovery often require users to click around and read a lot of text in order to find the text they want to read - a process that is often frustrating and tedious. This is exacerbated because of the limited amount of text that can be displayed on a computer screen. To improve the user experience of computer mediated information discovery, information visualization techniques are applied to the digital library context, while retaining traditional information organization concepts. In this article, the "virtual (book) spine" and the virtual spine viewer are introduced. The virtual spine viewer is an application which allows users to visually explore large information spaces or collections while also allowing users to hone in on individual resources of interest. The virtual spine viewer introduced here is an alpha prototype, presented to promote discussion and further work. Information discovery changed radically with the introduction of computerized library access catalogs, the World Wide Web and its search engines, and online bookstores. Yet few instances of these technologies provide a user experience analogous to walking among well-organized, well-stocked bookshelves - which many people find useful as well as pleasurable. To put it another way, many of us have heard or voiced complaints about the paucity of "online browsing" - but what does this really mean? In traditional information spaces such as libraries, often we can move freely among the books and other resources. When we walk among organized, labeled bookshelves, we get a sense of the information space - we take in clues, perhaps unconsciously, as to the scope of the collection, the currency of resources, the frequency of their use, etc. We also enjoy unexpected discoveries such as finding an interesting resource because library staff deliberately located it near similar resources, or because it was mis-shelved, or because we saw it on a bookshelf on the way to the water fountain.
  19. Beuth, P.: Voyeure gesucht : Böse Nachbarn (2008) 0.00
    0.0016340271 = product of:
      0.008170135 = sum of:
        0.008170135 = product of:
          0.01634027 = sum of:
            0.01634027 = weight(_text_:web in 2226) [ClassicSimilarity], result of:
              0.01634027 = score(doc=2226,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.108171105 = fieldWeight in 2226, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2226)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    You could call it schadenfreude - or simply entertainment. For there are undeniably talented entertainers among the web's denunciators. The most famous example at present is the Briton Tricia Walsh. She posted a video about her divorce battle on Youtube in which she blurts out embarrassing details about her husband. She tells the whole world that, because of her husband's high blood pressure, she had no sex with him, but that he hoarded condoms, Viagra and porn in a drawer. Four million users have already watched the video. For psychologist Schwab this is a typical example of an escalating conflict: "At a certain stage you want to harm the other person at all costs, even if you harm yourself in the process." And that is exactly what happened: Tricia Walsh lost the divorce battle not despite the video but because of it. "She tried to turn her husband's life into a soap opera by writing and acting out a melodrama," the judge scolded, and upheld the prenuptial agreement, which came out clearly in the husband's favour, as valid. The actress and screenwriter, who had never got beyond jobs in mayonnaise commercials, cheap horror films and the "Benny Hill Show", was thrown out of the shared flat. With sequels to the first Youtube video she is now trying to stay in the public eye - with diminishing success. The latest video, in which she announces that she will sell her ex's condoms on Ebay, was clicked on by only 70,000 gloating viewers. The Internet may never forget, but its users forget all the faster.
  20. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    0.0013616894 = product of:
      0.0068084467 = sum of:
        0.0068084467 = product of:
          0.013616893 = sum of:
            0.013616893 = weight(_text_:web in 1554) [ClassicSimilarity], result of:
              0.013616893 = score(doc=1554,freq=2.0), product of:
                0.15105948 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.04628742 = queryNorm
                0.09014259 = fieldWeight in 1554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1554)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Content
    What is CAIDA? The Cooperative Association for Internet Data Analysis was started in 1997 and is based in the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the world's leading teams of academic researchers studying how the Internet works [6]. Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scaleable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7]. MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that it is based on a static database of ISP backbone links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.

Years

Languages

  • e 518
  • d 181
  • a 8
  • el 3
  • f 2
  • i 2
  • es 1
  • nl 1

Types

  • a 307
  • i 17
  • r 16
  • s 16
  • x 14
  • n 13
  • m 10
  • p 5
  • b 2

Themes