Search (950 results, page 48 of 48)

  • language_ss:"e"
  • theme_ss:"Internet"
  1. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.00
    Score 7.6892605E-4: weight(_text_:web in 2) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105 (docFreq=4597, maxDocs=44218), queryNorm=0.02940506, fieldNorm=0.03125, fieldWeight=0.14422815, coord 1/2 × 1/9
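    The breakdown condensed above is Lucene's classic TF-IDF (ClassicSimilarity) scoring; reconstructing it from the numbers reported for this entry (the idf formula is the standard Lucene one, not shown on the page) gives approximately:

    \[
    \begin{aligned}
    \mathrm{tf} &= \sqrt{\mathrm{freq}} = \sqrt{2} \approx 1.4142, \qquad
    \mathrm{idf} = 1 + \ln\frac{44218}{4597+1} \approx 3.2635,\\
    \mathrm{queryWeight} &= \mathrm{idf}\cdot\mathrm{queryNorm} = 3.2635 \times 0.0294051 \approx 0.095964,\\
    \mathrm{fieldWeight} &= \mathrm{tf}\cdot\mathrm{idf}\cdot\mathrm{fieldNorm} = 1.4142 \times 3.2635 \times 0.03125 \approx 0.144228,\\
    \mathrm{score} &= \tfrac{1}{2}\cdot\tfrac{1}{9}\cdot\mathrm{queryWeight}\cdot\mathrm{fieldWeight} \approx 7.69\times 10^{-4}.
    \end{aligned}
    \]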
    
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on the structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a concise overview of discipline-specific classifications in Mathematics, Computing and Physics on the one hand, and of general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further, we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions for a more widespread application of such a method. Part of this material was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
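    For readers unfamiliar with KWIC indexing, the sketch below illustrates the general KeyWord In Context idea over classification captions. It is an illustration only, not the authors' software; the classification codes, captions and stopword list are sample values.

```python
# Minimal sketch of a KWIC (KeyWord In Context) list over classification
# captions. Sample data and stopwords are invented for the example.
STOPWORDS = {"and", "of", "the", "in", "for"}

def kwic(captions, width=30):
    entries = []
    for code, caption in captions:
        words = caption.split()
        for i, word in enumerate(words):
            if word.lower() in STOPWORDS:
                continue
            left = " ".join(words[:i])[-width:]
            right = " ".join(words[i + 1:])[:width]
            entries.append((word.lower(), left, word, right, code))
    return sorted(entries)  # alphabetical by keyword

sample = [("68P20", "Information storage and retrieval"),
          ("68T30", "Knowledge representation")]
for _, left, word, right, code in kwic(sample):
    print(f"{left:>30} | {word:<15} | {right:<30} ({code})")
```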
  2. Lim, S.: How and why do college students use Wikipedia? (2009) 0.00
    Score 7.6892605E-4: weight(_text_:web in 3163) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.03125, fieldWeight=0.14422815, coord 1/2 × 1/9
    
    Abstract
    The purposes of this study were to explore college students' perceptions, uses of, and motivations for using Wikipedia, and to understand their information behavior concerning Wikipedia based on social cognitive theory (SCT). A Web survey was used to collect data in the spring of 2008. The study sample consisted of students from an introductory undergraduate course at a large public university in the midwestern United States. A total of 134 students participated in the study, resulting in a 32.8% response rate. The major findings of the study include the following: Approximately one-third of the students reported using Wikipedia for academic purposes. The students tended to use Wikipedia for quickly checking facts and finding background information. They had positive past experiences with Wikipedia; however, interestingly, their perceptions of its information quality were not correspondingly high. The level of their confidence in evaluating Wikipedia's information quality was, at most, moderate. Respondents' past experience with Wikipedia, their positive emotional state, their disposition to believe information in Wikipedia, and information utility were positively related to their outcome expectations of Wikipedia. However, among the factors affecting outcome expectations, only information utility and respondents' positive emotions toward Wikipedia were related to their use of it. Further, when all of the independent variables, including the mediator, outcome expectations, were considered, only the variable information utility was related to Wikipedia use, which may imply a limited applicability of SCT to understanding Wikipedia use. However, more empirical evidence is needed to determine the applicability of this theory to Wikipedia use. Finally, this study supports the knowledge value of Wikipedia (Fallis, [2008]), despite students' cautious attitudes toward Wikipedia. The study suggests that educators and librarians need to provide better guidelines for using Wikipedia, rather than prohibiting Wikipedia use altogether.
  3. Deussen, N.: Sogar der Mars könnte bald eine virtuelle Heimat bekommen : Gut 4,2 Milliarden sind nicht genug: Die sechste Version des Internet-Protokolls schafft viele zusätzliche Online-Adressen (2001) 0.00
    Score 6.7964103E-4: weight(_text_:web in 5729) [ClassicSimilarity], freq=4.0 (tf=2.0), idf=3.2635105, fieldNorm=0.01953125, fieldWeight=0.12748088, coord 1/2 × 1/9
    
    Abstract
    Virtual space is getting crowded. The possibilities of the virtual appear to be nearly exhausted: Internet addresses will soon run short. Once whirlpools and washing machines need their own access to the Internet, the supply of identifying numbers becomes scarce. To counter the looming shortage, a revised version of the Internet Protocol (IP) has been in the works for years. Apart from a few test runs, however, the new edition has not yet found its way into the network, although it has already caused a stir because of data protection concerns. Communication between computers on the Internet follows a kind of etiquette: the protocol specifies how the machines exchange data with one another. But first the machines need names (such as www.fr-aktuell.de) and addresses (here: 194.175.173.20) so that they can introduce themselves (shake hands) and later send data. These designations are assigned by the Internet Corporation for Assigned Names and Numbers (Icann). The first proposal for a uniform exchange procedure was made by Bob Kahn and Vint Cerf in 1974. At that time, barely a thousand mainframes at about 250 sites were trying to communicate with one another in the now legendary, militarily used Arpanet. Rules were needed to bring order to the babel of different machine types. The idea evolved into a protocol which, in typical computer-science fashion, was given the abbreviation TCP/IP. With roughly 100,000 connected computers, the network became civilian in 1983, and TCP/IP became the official standard. Today, the fourth version of the Internet Protocol (IPv4) governs the transport of bits. The address is prepended to every data packet; it consists of digits and is exactly 32 bits long, which yields more than 4.2 billion combinations. Enough for a globe on which the six-billionth citizen of Earth had only recently seen the light of the real world, the computer operators thought at the time. Then came the World Wide Web.
    This stroke of genius from the European Laboratory for Particle Physics (Cern) in Geneva turned the research network into a mass medium, and electronic mail boomed as well. "The growth of the networks is exceeding all expectations," summarizes Klaus Birkenbihl of the computer science research center GMD. Every web site, every e-mail box, every computer that is online via a leased line needs a unique identification. Estimates of how many IPv4 addresses are still free range between 40 and 10 percent. Consumption, in any case, is rising rapidly: the number of web sites is currently approaching one billion, and even more network numbers are already being used up for e-mail addresses. Intelligent household appliances will soon exploit the address space even further: the corner shop wants to know which refrigerator ordered the milk, the video service needs the ID of the PC recorder to transfer a film, and the installer's computer needs the IP address of the heating system for remote maintenance. Mobile phones that will later send messages over the Internet, and Internet telephony, may come away empty-handed. But before Internet addresses become hot black-market goods, a new address system with more possibilities is to be introduced. As early as 1990, the Internet Engineering Task Force (IETF) had begun thinking about a new Internet protocol with a larger supply of addresses. Within the IETF, researchers and software and hardware engineers work on the continuous improvement of the network's architecture and operation. One of its working groups predicted that the IPv4 supply would run out in 2005. It took five years, but then all the Internet bodies agreed: a new protocol version, IPv6, was needed. Then nothing further happened. Finally, in 1999, Josh Elliot of Icann announced that new addresses would be distributed from then on. "A historic moment," he rejoiced.
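    A quick back-of-the-envelope check of the address arithmetic behind the article (my sketch, not part of the original text): IPv4's 32-bit addresses give the roughly 4.2 billion combinations mentioned above, while IPv6's 128-bit addresses enlarge the space enormously.

```python
import ipaddress

# IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32    # 4,294,967,296 -- the "good 4.2 billion" of the title
ipv6_total = 2 ** 128   # roughly 3.4e38

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.3e}")

# The example address quoted in the text parses as an ordinary IPv4 address.
addr = ipaddress.ip_address("194.175.173.20")
print(addr.version, int(addr))   # 4, and the 32-bit integer behind the dotted form
```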
  4. Dron, J.; Boyne, C.; Mitchell, R.; Siviter, P.: Darwin among the indices : a report on COFIND, a self-organising resource base (2000) 0.00
    Score 6.728103E-4: weight(_text_:web in 106) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.02734375, fieldWeight=0.12619963, coord 1/2 × 1/9
    
    Abstract
    In this paper we report on the development and use of CoFIND (Collaborative Filter In N Dimensions), a web-based collaborative bookmark engine, designed as part of a self-organising learning environment to generate a list of useful and relevant learning resources. Users of CoFIND can add pointers to resources and rate them according to two types of category, 'topics' and 'qualities'. Along with the links and descriptions of the resources themselves, both topics and qualities are also entered by users, thus generating a resource-base and collective categorisation scheme based on the needs and wishes of its participants. A topic is analogous to a traditional category whereby any object can be considered to be in the set or out of it. Examples of topics might include 'animals', 'computing', 'travel' and so on. Qualities, on the other hand, are the things that users value in a resource, and most of them are (in English at any rate) adjectives or adjectival descriptive phrases. It is always possible to say of a quality that a given resource is more or less so. Examples of qualities might include 'good for beginners', 'amusing', 'colourful', 'turgid' and so on. It is the qualities that provide the nth dimension of CoFIND, allowing much subtler ratings than typical collaborative filtering systems, which tend to rate resources according to a simple good/bad or useful/useless scale. CoFIND thus dynamically accommodates changing needs in learners, essential because the essence of learning is change. In use, the user enters a number of qualities and/or topics that interest them. Resources are returned in a list ordered according to the closeness of match to the required topics and qualities, weighted by the number of users who have categorised or rated a particular resource. The more a topic or quality is used to categorise different resources, the more prominent its position in the list of selectable topics or categories. Not only do less popular qualities sink to the bottom of this list, they can also fall off it altogether, in a process analogous to a Darwinian concept of evolution, where species of quality or topic fight each other for votes and space on the list, and topics and qualities are honed so that only the most useful survive. The system is designed to teeter on the 'edge of chaos', thus allowing clear species to develop without falling into chaotic disorder or stagnant order. The paper reports on some ongoing experiments using the CoFIND system to support a number of learning environments within the University of Brighton. In particular, we report on a cut-down form used to help teach a course on Human-Computer Interaction, whereby students not only rate screen designs but collaboratively create the qualities used to rate those resources. Mention is made of plans to use the system to establish metadata schema for courseware component design, a picture database and to help facilitate small group research. The paper concludes by analysing early results, indicating that the approach provides a promising way to automatically elicit consensus on issues of categorisation and rating, allowing evolution instead of the 'experts' to decide classification criteria. However, several problems need to be overcome, including difficulties encouraging use of the system (especially when the resource base is not highly populated) and problems tuning the rate of evolution in order to maintain a balance between stability and disorder.
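    To make the ranking idea concrete, here is a small hedged sketch (my own illustration, not the CoFIND implementation): resources carry user-contributed topics and quality votes, and are ordered by closeness of match weighted by how many users have rated them. The Resource class, field names and scoring weights are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    url: str
    topics: set = field(default_factory=set)       # user-assigned topics
    qualities: dict = field(default_factory=dict)  # quality -> number of user votes

def rank(resources, wanted_topics, wanted_qualities):
    """Order resources by match to requested topics/qualities, weighted by participation."""
    def score(r):
        topic_hits = len(r.topics & wanted_topics)
        quality_votes = sum(r.qualities.get(q, 0) for q in wanted_qualities)
        raters = len(r.topics) + sum(r.qualities.values())
        return (topic_hits + quality_votes) * (1 + raters)  # illustrative weighting
    return sorted(resources, key=score, reverse=True)

resources = [
    Resource("http://example.org/a", {"computing"}, {"good for beginners": 3}),
    Resource("http://example.org/b", {"travel"}, {"amusing": 1}),
]
print([r.url for r in rank(resources, {"computing"}, {"good for beginners"})])
```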
  5. The Internet in everyday life (2002) 0.00
    Score 6.728103E-4: weight(_text_:web in 2223) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.02734375, fieldWeight=0.12619963, coord 1/2 × 1/9
    
    Footnote
    Review in: JASIST 55(2004) no.1, S.278-279 (P.K. Nayar): "We live in an increasingly wired and digitized world. Work, leisure, shopping, research, and interpersonal communications are all mediated by the new technologies. The present volume begins with the assumption that the Internet is not a special system; it is routinely incorporated into the everyday. Wellman and Haythornthwaite note that increasing access and commitment (doing more types of things online), domestication (online access from home), and longer work hours (working from anywhere, including home) are trends in everyday Internet use. In their elaborate introduction to the volume, Wellman and Haythornthwaite explore the varied dimensions of these trends in terms of the digital divide, the demographic issues of Internet use and online behavior (that is, social interaction). This sets the tone for the subsequent essays, most of which are voyages of discovery, seeking patterns of use and behavior. The focus of individual essays is dual: empirical study/data and theoretical conclusions that range from the oracular to the commentary. Readers will find this approach useful because the conclusions drawn are easily verified against statistics (a major part of the volume is composed of tables and databases). It is also consciously tilted toward the developed countries where Internet use is extensive. However, the effort at incorporating data from ethnic communities within developed nations, Japan and India, renders the volume more comprehensive. Some gaps are inevitable in any volume that seeks to survey anything as vast as the role of the Internet in everyday life. There is almost no discussion of subcultural forms that have mushroomed within and because of cyberspace. Now technology, we know, breeds its own brand of discontent. Surely a discussion of hackers, who, as Douglas Thomas has so clearly demonstrated in his book Hacker Culture (2002), see themselves as resisting the new "culture of secrecy" of corporate and political mainstream culture, is relevant to the book's ideas? If the Internet stands for a whole new mode of community building, it also stands for increased surveillance (particularly in the wake of 9/11). Under these circumstances, the use of computer-mediated communication to empower subversion or to control it assumes enormous politico-economic significance. And individual Internet users come into this on an everyday basis, as exemplified by the American housewives who insinuate themselves into terrorist web/chat spaces as sympathizers and crack their identities for the FBI, CIA, and other assorted agencies to follow up on. One more area that could have done with some more survey and study is the rise of a new techno-elite. Techno-elitism, as symbolized by images of the high-power "wired" executive, eventually becomes mainstream culture. Those who control the technology also increasingly control the information banks. The studies in the present volume explore age differentials and class distinctions in the demography of Internet users, but neglect to account for the specific levels of corporate/scientific/political hierarchy occupied by the techno-savvy. R.L. Rutsky's High Techne (1999) has demonstrated how any group (hackers, corporate heads, software engineers) with a high level of technological expertise modulates into icons of achievement.
Tim Jordan in his Cyberpower (1999) and Chris Hables Gray in Cyborg Citizen (2001) also emphasize the link between technological expertise, the rise of a techno-elite, and "Cyberpower." However, it would be boorish, perhaps, to point out such lapses in an excellent volume. The Internet in Everyday Life will be useful to students of cultural, communication, and development studies, cyberculture and social studies of technology."
  6. Clyde, L.A.: Weblogs and libraries (2004) 0.00
    Score 6.728103E-4: weight(_text_:web in 4496) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.02734375, fieldWeight=0.12619963, coord 1/2 × 1/9
    
    Footnote
    Review in: B.I.T. online 8(2005) H.2, S.202 (J. Plieninger): "Weblogs, or blogs (in German: Netztagebücher), have been causing a stir for some years as a new form of communication on the World Wide Web. While at first it was individuals who conveyed information and opinions through weblogs, weblogs are increasingly developing into media through which institutions, too, distribute marketing information to users and customers. It should be noted, of course, that a weblog is not necessarily a one-way medium: since the operator of a weblog often gives users the opportunity to post comments, a weblog takes on the character of a forum in which the information on offer is, or can be, discussed. If you are sure of your institution's services and can handle its public image with confidence, a weblog is therefore a good way to convey content and to receive feedback directly. If not, the comment function can simply be dispensed with. Anyone considering introducing a weblog as a further marketing instrument and as a means of raising the library's profile will find a comprehensive introduction in this work. The author is a professor at a library school in Iceland and gives an overview of weblogs in general and of their use in the library field in particular. After an overview of weblogs as a new phenomenon of the Internet, she offers an assessment of blogs as information sources and then describes how to search for weblogs and for weblog content. She then deals with weblogs in library and information science and goes on to discuss weblogs created by libraries. Then comes the practical part: how to set up a weblog and, in my opinion the most important chapter, how to manage it. Finally, she provides information on sources about blogs. A subject index concludes the volume."
  7. XML data management : native XML and XML-enabled database systems (2003) 0.00
    Score 5.437129E-4: weight(_text_:web in 2073) [ClassicSimilarity], freq=4.0 (tf=2.0), idf=3.2635105, fieldNorm=0.015625, fieldWeight=0.1019847, coord 1/2 × 1/9
    
    Footnote
    Review in: JASIST 55(2004) no.1, S.90-91 (N. Rhodes): "The recent near-exponential increase in XML-based technologies has exposed a gap between these technologies and those that are concerned with more fundamental data management issues. This very comprehensive and well-organized book has quite neatly filled the gap, thus achieving most of its stated intentions. The target audiences are database and XML professionals wishing to combine XML with modern database technologies, and such is the breadth of scope of this book that few would not find it useful in some way. The editors have assembled a collection of chapters from a wide selection of industry heavyweights and, as with most books of this type, it exhibits many disparate styles, but thanks to careful editing it reads well as a cohesive whole. Certain sections have already appeared in print elsewhere and there is a good deal of corporate flag-waving, but nowhere does it become over-intrusive. The preface provides only the very briefest of introductions to XML but instead sets the tone for the remainder of the book. The twin terms of data- and document-centric XML (Bourret, 2003) that have achieved so much recent currency are reiterated before XML data management issues are considered. It is here that the book's aims are stated, mostly concerned with the approaches and features of the various available XML data management solutions. Not surprisingly, in a specialized book such as this one, an introduction to XML consists of a single chapter. For issues such as syntax, DTDs and XML Schemas the reader is referred elsewhere; here, Chris Brandin provides a practical guide to achieving good grammar and style and argues convincingly for the use of XML as an information-modeling tool. Using a well-chosen and simple example, a practical guide to modeling information is developed, replete with examples of the pitfalls. This brief but illuminating chapter (incidentally available as a "taster" from the publisher's web site) notes that one of the most promising aspects of XML is that applications can be built to use a single mutable information model, obviating the need to change the application code, but that good XML design is the basis of such mutability.
    After several detailed examples of XML, Direen and Jones discuss sequence comparisons. The ability to create scored comparisons by such techniques as sequence alignment is fundamental to bioinformatics. For example, the function of a gene product may be inferred from similarity with a gene of known function but originating from a different organism, and any information modeling method must facilitate such comparisons. One such comparison tool, BLAST, which utilizes a heuristic method, has been the tool of choice for many years and is integrated into the NeoCore XMS (XML Management System) described herein. Any set of sequences that can be identified using an XPath query may thus become the targets of an embedded search. Again examples are given, though a BLASTp (protein) search is labeled as being BLASTn (nucleotide sequence) in one of them. Some variants of BLAST are computationally intensive, e.g., tBLASTx, where a nucleotide sequence is dynamically translated in all six reading frames and compared against similarly translated database sequences. Though these variants are implemented in NeoCore XMS, it would be interesting to see runtimes for such comparisons. Obviously the utility of this and the other four quite specific examples will depend on your interest in the application area, but two that are more research-oriented and general follow them. These chapters (on using XML with inductive databases and on XML warehouses) are both readable critical reviews of their respective subject areas. For those involved in the implementation of performance-critical applications an examination of benchmark results is mandatory; however, very few would examine the benchmark tests themselves. The picture that emerges from this section is that no single set is comprehensive and that some functionalities are not addressed by any available benchmark. As always, there is no substitute for an intimate knowledge of your data and how it is used. In a direct comparison of an XML-enabled and a native XML database system (unfortunately neither is named), the authors conclude that though the native system has the edge in handling large documents, this comes at the expense of increasing index and data file size. The need to use legacy data and software will certainly favor the all-pervasive XML-enabled RDBMS such as Oracle 9i and IBM's DB2. Of more general utility is the chapter by Schmauch and Fellhauer comparing the approaches used by database systems for the storing of XML documents. Many of the limitations of current XML-handling systems may be traced to problems caused by the semi-structured nature of the documents, and while the authors have no panacea, the chapter forms a useful discussion of the issues and even raises the ugly prospect that a return to the drawing board may be unavoidable. The book concludes with an appraisal of the current status of XML by the editors that perhaps focuses a little too little on the database side, but overall I believe this book to be very useful indeed. Some of the indexing is a little idiosyncratic; for example, some tags used in the examples are indexed (perhaps a separate examples index would be better), and Ron Bourret's excellent web site might be better placed under "Bourret" rather than under "Ron", but this doesn't really detract from the book's qualities. The broad spectrum and careful balance of theory and practice is a combination that both database and XML professionals will find valuable."
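    The point about XPath-selected sequences can be illustrated with a short, hedged sketch using Python's standard library (my example, not NeoCore's API); the document structure and sequences are invented, and in practice each selected sequence would be handed to an external comparison tool such as BLAST.

```python
import xml.etree.ElementTree as ET

# Toy document: gene records with embedded sequences (invented data).
doc = ET.fromstring("""
<genes>
  <gene organism="E. coli"><seq>ATGAAACGCATT</seq></gene>
  <gene organism="H. sapiens"><seq>ATGGCCCTGTGG</seq></gene>
</genes>
""")

# Select one organism's sequences with ElementTree's limited XPath support;
# each hit could then become the target of an embedded comparison.
targets = [g.findtext("seq") for g in doc.findall(".//gene[@organism='E. coli']")]
print(targets)   # ['ATGAAACGCATT']
```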
  8. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    Score 4.805788E-4: weight(_text_:web in 1554) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.01953125, fieldWeight=0.09014259, coord 1/2 × 1/9
    
    Content
    What is CAIDA? The Cooperative Association for Internet Data Analysis was started in 1997 and is based at the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the world's leading teams of academic researchers studying how the Internet works [6]. Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scaleable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system, which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7]. MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that it is based on a static database of ISP backbone links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.
  9. Ding, J.: Can data die? : why one of the Internet's oldest images lives on without its subject's consent (2021) 0.00
    Score 4.805788E-4: weight(_text_:web in 423) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.01953125, fieldWeight=0.09014259, coord 1/2 × 1/9
    
    Abstract
    The photograph of Lena Forsén, the real human behind the Lenna image, was first published in Playboy in 1972. Soon after, USC engineers searching for a suitable test image for their image processing research sought inspiration from the magazine. They deemed Lenna the right fit and scanned the image into digital, RGB existence. From here, the story of the image follows the story of the internet. Lenna was one of the first inhabitants of ARPANet, the internet's predecessor, and then the World Wide Web. While the image's reach was limited to a few research papers in the '70s and '80s, in 1991, Lenna was featured on the cover of an engineering journal alongside another popular test image, Peppers. This caught the attention of Playboy, which threatened a copyright infringement lawsuit. Engineers who had grown attached to Lenna fought back. Ultimately, they prevailed, and as a Playboy VP reflected on the drama: "We decided we should exploit this because it is a phenomenon." The Playboy controversy canonized Lenna in engineering folklore and prompted an explosion of conversation about the image. Hits on the image across the internet rose to a peak in 1995.
  10. Mossberger, K.; Tolbert, C.J.; Stansbury, M.: Virtual inequality : beyond the digital divide (2003) 0.00
    Score 3.8446303E-4: weight(_text_:web in 1795) [ClassicSimilarity], freq=2.0 (tf=1.4142135), idf=3.2635105, fieldNorm=0.015625, fieldWeight=0.07211407, coord 1/2 × 1/9
    
    Footnote
    The economic opportunity divide is predicated on the hypothesis that there has, indeed, been a major shift in opportunities driven by changes in the information environment. The authors document this paradigm shift well with arguments from the political and economic right and left. This chapter might be described as an "attitudinal" chapter. The authors are concerned here with respondents' perceptions of their information skills and skill levels in relation to their economic outlook and opportunities. Technological skills and economic opportunities are correlated, one finds, in the minds of all across all ages, genders, races, ethnicities, and income levels. African Americans in particular are "... attuned to the use of technology for economic opportunity" (p. 80). The fourth divide is the democratic divide. The Internet may increase political participation, the authors posit, but only among groups predisposed to participate and perhaps among those with the skills necessary to take advantage of the electronic environment (p. 86). Certainly the Web has played an important role in disseminating and distributing political messages and in some cases in political fund raising. But by the analysis here, we must conclude that the message does not reach everyone equally. Thus, the Internet may widen the political participation gap rather than narrow it. The book has one major, perhaps fatal, flaw: its methodology and statistical application. The book draws upon a survey performed for the authors in June and July 2001 by Kent State University's Computer Assisted Telephone Interviewing (CATI) lab (pp. 7-9). CATI employed a survey protocol provided to the reader as Appendix 2. An examination of the questionnaire reveals that all questions yield either nominal or ordinal responses, including the income variable (pp. 9-10). Nevertheless, Mossberger, Tolbert, and Stansbury performed a series of multiple regression analyses (reported in a series of tables in Appendix 1) utilizing these data. Regression analysis requires interval/ratio data in order to be valid, although nominal and ordinal data can be incorporated by building dichotomous dummy variables. Perhaps Mossberger, Tolbert, and Stansbury utilized dummy variables, but I do not find that discussed. Moreover, I would question a multiple regression made up completely of dichotomous dummy variables. I come away from Virtual Inequality with mixed feelings. First, it is useful to think of the digital divide as more than one phenomenon. The four divides that Mossberger, Tolbert, and Stansbury offer (access, skills, economic opportunity, and democratic) are useful as a point of departure and debate. No doubt, other divides will be identified and documented. This book will lead the way. Second, without question, Mossberger, Tolbert, and Stansbury provide us with an extremely well-documented, -written, and -argued work. Third, the authors are to be commended for the multidisciplinarity of their work. Would that we could see more like it. My reservations about their methodological approach, however, hang over this review like a shroud."
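    For readers unfamiliar with the dummy-variable point raised in the review, here is a brief hedged sketch (invented data, not the authors' survey) of how nominal responses can be recoded as dichotomous dummies before entering a regression.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Invented survey-style data: nominal predictors, a numeric outcome.
survey = pd.DataFrame({
    "income_bracket": ["low", "middle", "high", "middle", "low"],
    "race":           ["white", "black", "white", "latino", "black"],
    "skill_score":    [2, 3, 5, 4, 1],
})

# Recode each nominal variable as dichotomous dummy columns, dropping one
# level per variable to avoid perfect collinearity.
X = pd.get_dummies(survey[["income_bracket", "race"]], drop_first=True)
y = survey["skill_score"]

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(2))))
```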

Types

  • a 807
  • m 96
  • s 46
  • el 30
  • b 2
  • r 2
  • i 1
