Search (411 results, page 21 of 21)

  • language_ss:"e"
  • theme_ss:"Internet"
  • year_i:[2000 TO 2010}
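The active filters above use Solr/Lucene fielded-query syntax: "language_ss" and "theme_ss" are string facet fields, and the mixed brackets in year_i:[2000 TO 2010} denote a half-open range (2000 inclusive, 2010 exclusive). A minimal sketch of how such a filtered request could be assembled; the host, core name "literature", and page size of 20 are illustrative assumptions, not taken from this page:

```python
from urllib.parse import urlencode

# Rebuild the active filters above as Solr fq (filter query) parameters.
# Host, core name and rows-per-page are illustrative assumptions.
params = [
    ("q", "*:*"),
    ("fq", 'language_ss:"e"'),
    ("fq", 'theme_ss:"Internet"'),
    ("fq", "year_i:[2000 TO 2010}"),  # half-open: 2000 <= year < 2010
    ("rows", "20"),
    ("start", "400"),  # page 21 at 20 results per page (411 hits total)
]
url = "http://localhost:8983/solr/literature/select?" + urlencode(params)
print(url)
```

With 411 results and 20 rows per page, page 21 holds the last eleven hits, which matches the page header above.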
  1. Devadason, F.J.; Intaraksa, N.; Patamawongjariya, P.; Desai, K.: Faceted indexing application for organizing and accessing internet resources (2003) 0.00
    0.0011898145 = product of:
      0.004759258 = sum of:
        0.004759258 = weight(_text_:information in 3966) [ClassicSimilarity], result of:
          0.004759258 = score(doc=3966,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.0775819 = fieldWeight in 3966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=3966)
      0.25 = coord(1/4)
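The score breakdown shown for the first result is standard Lucene "explain" output for ClassicSimilarity (TF-IDF). As a sketch, the arithmetic can be reproduced from the constants in that breakdown; the formulas are Lucene's, only the variable names are ours:

```python
import math

# Constants taken from the explain tree of the first result above.
doc_freq, max_docs = 20772, 44218
query_norm = 0.034944877
field_norm = 0.03125   # encodes document-length normalization
freq = 2.0             # term frequency of "information" in the field
coord = 0.25           # 1 of 4 query clauses matched

# ClassicSimilarity building blocks.
idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # ~ 1.7554779
tf = math.sqrt(freq)                             # ~ 1.4142135
query_weight = idf * query_norm                  # ~ 0.06134496
field_weight = tf * idf * field_norm             # ~ 0.0775819

# Final score, matching the headline value ~ 0.0011898145.
score = query_weight * field_weight * coord
print(score)
```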
    
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  2. Ford, N.; Mansourian, Y.: ¬The invisible web : an empirical study of "cognitive invisibility" (2006) 0.00
    Abstract
    Purpose - The purpose of this paper is to report an empirical investigation into conceptions of the "invisible web". Design/methodology/approach - This was an exploratory qualitative study based on in-depth semi-structured interviews with 15 members of academic staff from three biology-related departments at the University of Sheffield. Concepts emerged from an inductive analysis of the interview data to form a tentative model. Findings - A distinction is drawn between technical objective conceptions of the "invisible web" that commonly appear in the literature, and a cognitive subjective conception based on searchers' perceptions of search failure, and a tentative model of "cognitive invisibility" is presented. The relationship between objective and subjective conceptions, and implications for training, are discussed. Research limitations/implications - The research was qualitative and exploratory, designed to elicit sensitising concepts and to "map the territory". It thus aims to provide a tentative model that could form the basis for more systematic study. Such research could investigate the validity of the categories in different and/or larger samples, seek further to illuminate, challenge, extend or refute the model, and address issues of generalisability. Practical implications - The paper presents a conceptual model that is intended to be a useful reference point for researchers wishing to investigate user-based aspects of search failure and the invisible web. It may also be useful to trainers and those interested in developing information literacy, in that it differentiates technical objective and cognitive subjective conceptions of "invisibility", and discusses the implications for helping searchers develop more effective searching capabilities. Originality/value - The paper offers an alternative cognitive subjective view of "web invisibility" to that more commonly presented in the literature.
It contributes to a still small body of empirical research into user-based aspects of the invisible web.
  3. Veelen, I. van: ¬The truth according to Wikipedia (2008) 0.00
    Abstract
    Google or Wikipedia? Those of us who search online -- and who doesn't? -- are getting referred more and more to Wikipedia. For the past two years, this free online "encyclopedia of the people" has been topping the lists of the world's most popular websites. But do we really know what we're using? Backlight plunges into the story behind Wikipedia and explores the wonderful world of Web 2.0. Is it a revolution, or pure hype? Director IJsbrand van Veelen goes looking for the truth behind Wikipedia. Only five people are employed by the company, and all its activities are financed by donations and subsidies. The online encyclopedia that everyone can contribute to and revise is now even bigger than the illustrious Encyclopedia Britannica. Does this spell the end for traditional institutions of knowledge such as Britannica? And should we applaud this development as progress or mourn it as a loss? How reliable is Wikipedia? Do "the people" really hold the lease on wisdom? And since when do we believe that information should be free for all? In this film, "Wikipedians," the folks who spend their days writing and editing articles, explain how the online encyclopedia works. In addition, the parties involved discuss Wikipedia's ethics and quality of content. It quickly becomes clear that there are camps of both believers and critics. Wiki's Truth introduces us to the main players in the debate: Jimmy Wales (founder and head Wikipedian), Larry Sanger (co-founder of Wikipedia, now head of Wiki spin-off Citizendium), Andrew Keen (author of The Cult of the Amateur: How Today's Internet Is Killing Our Culture and Assaulting Our Economy), Phoebe Ayers (a Wikipedian in California), Ndesanjo Macha (Swahili Wikipedia, digital activist), Tim O'Reilly (CEO of O'Reilly Media, the "inventor" of Web 2.0), Charles Leadbeater (philosopher and author of We Think, about crowdsourcing), and Robert McHenry (former editor-in-chief of Encyclopedia Britannica). 
Opening is a video by Chris Pirillo. The questions surrounding Wikipedia lead to a bigger discussion of Web 2.0, a phenomenon in which the user determines the content. Examples include YouTube, MySpace, Facebook, and Wikipedia. These sites would appear to provide new freedom and opportunities for undiscovered talent and unheard voices, but just where does the boundary lie between expert and amateur? Who will survive according to the laws of this new "digital Darwinism"? Are equality and truth really reconcilable ideals? And most importantly, has the Internet brought us wisdom and truth, or is it high time for a cultural counterrevolution?
  4. Hunt, R.: Civilisation and its disconnects (2008) 0.00
    Abstract
    Purpose - This paper aims to explore some initial and necessarily broad ideas about the effects of the world wide web on our methods of understanding and trusting, online and off. Design/methodology/approach - The paper considers the idea of trust via some of the revolutionary meanings inherent in the world wide web at its public conception in 1994, and some of its different meanings now. It does so in the context of the collaborative reader-writer Web2.0 (of today), and also through a brief exploration of our relationship to the grand narratives (and some histories) of the post-war West. It uses a variety of formal approaches taken from information science, literary criticism, philosophy, history, and journalism studies - together with some practical analysis based on 15 years as a web practitioner and content creator. It is a starting point. Findings - This paper suggests that a pronounced effect of the world wide web is the further atomising of many once-shared Western post-war narratives, and the global democratising of doubt as a powerful though not necessarily helpful epistemological tool. The world wide web is the place that most actively demonstrates contemporary doubt. Research limitations/implications - This is the starting place for a piece of larger cross-faculty (and cross-platform) research into the arena of trust and doubt. In particular, the relationship of concepts such as news, event, history and myth with the myriad content platforms of new media, the idea of the digital consumer, and the impact of geography on knowledge that is enshrined in the virtual. 
This paper attempts to frame a few of the initial issues inherent in the idea of "trust" in the digital age and argues that without some kind of shared aesthetics of narrative judgment brought about through a far broader public understanding of (rather than an interpretation of) oral, visual, literary and multi-media narratives, stories and plots, we cannot be said to trust many types of knowledge - not just in philosophical terms but also in our daily actions and behaviours. Originality/value - This paper initiates debate about whether the creation of a new academic "space" in which cross-faculty collaborations into the nature of modern narrative (in terms of production and consumption; producers and consumers) might be able to help us to understand more of the social implications of the collaborative content produced for consumption on the world wide web.
  5. Lipow, A.G.: ¬The virtual reference librarian's handbook (2003) 0.00
    Footnote
    Rez. in: B.I.T. online 6(2003) H.3, S.298-299 (J. Plieninger): "Anyone who read Hermann Rösch's article on library reference service on the web in the previous issue of B.I.T.online and then considered introducing such a service will find this book the right tool for getting fit to launch an online reference service. The author is well known in the American Internet Librarian and Reference Librarian scene: in 1993 she wrote, with two co-authors, Crossing the Internet Threshold, a tutorial on using the net that helped the profession put the Internet to broad use. The book reviewed here could take on a similar function for the introduction of virtual reference: it offers a self-study course that vividly conveys the fundamentals of, and the basic attitude required for, implementing such a service. What does this course contain? The first part of the book covers the decision process of introducing an online reference service: advantages and disadvantages are discussed, users' needs are examined ("There will always be a need for a human consultant to satisfy the needs of the information seeker.") and the basics of choosing suitable software are treated. The second part then deals with setting up the virtual reference desk: training in the particular forms of communication to observe in chat, embedding the new service in the library's mission statement, rules of procedure and work organization ("library policies"), and finally equipping the reference desk comfortably for users and staff, up to questions of evaluation and quality assurance.
 The third part addresses the task of implementing a service that sustains itself, by setting up marketing for the new service that promotes it through conventional and new channels and by designing it to be user-friendly.
    The extensive appendix (44 pp.) contains checklists, exercises and training materials, above all on proper communication with users. At the end of the book there is a subject index. A CD-ROM with all the exercises and links listed in the book is enclosed, so that they can also be consulted on screen or printed out. It should be emphasized that the book is laid out as a workbook: there is plenty of room for notes, many vivid examples are given, and several exercises are set for each chapter. It is a typical American introductory text, which in enviably vivid and consistently practice-oriented fashion introduces the reader to a new field of work, so that after reading it one really has the impression of being equipped to build up such a service in a professional manner. Perhaps it should also be stressed that the author has managed to shape the content so that it has a longer shelf life: although all fundamental matters are dealt with, such as the criteria for deciding on a software product, no specific software is ever discussed. Such details would quickly become outdated, in contrast to the criteria for judging software. The author also takes care to list and discuss Internet sources where one can keep up to date on these questions. A book that belongs in the hands of all those for whom the introduction of an online reference service is an option. At the end of his article, Hermann Rösch listed only a few university libraries that have already introduced an online reference service. Will public libraries offer such a service only cooperatively, via the Deutsche Internetbibliothek?
 Hopefully not, since setting up a virtual reference desk is an excellent opportunity to strengthen the library's image as an information intermediary in a lasting way and to open up access to information for those users who do not come to the library. In any case, this book provides the basis for weighing the pros and cons of such a service and, if it is introduced, for putting the training of the reference librarians concerned on a solid footing."
  6. Barabási, A.-L.: Linked: The New Science of Networks (2002) 0.00
    Footnote
    Rez. in: nfd 54(2003) H.8, S.497 (T. Mandl): "Laws of digital anarchy: hyperlinks on the Internet arise as the result of social processes and can also be interpreted as a formal graph in the mathematical sense. Hyperlinks are a highly topical subject in information retrieval, since search engines take the link structure into account when computing their results. Algorithms for determining the "good reputation" of a page, such as Google's PageRank, weight a page more highly when many links point to it. Beyond that, Barabási explains to his readers how such phenomena come about: social processes on the net work in such a way that already well-known pages are more likely to attract further links or new visitors. Barabási is a physicist at Notre-Dame University and, like Huberman, he lacks the information science perspective; he hardly asks how knowledge about networks could lead to improvements in information systems that better meet user needs. The book is nevertheless worth reading for information scientists. Barabási presents current research on the network structure of the Internet simply, with almost no sacrifice of currency or complexity. Like Huberman, he largely dispenses with formulas and other formalisms. The Hungarian-born Barabási, moreover, leaves out no anecdote, be it about the founders of graph theory, papers rejected in peer review, or personal encounters with other researchers. Barabási starts with simple network structures and proceeds didactically, via Internet-like networks, to applications and practical examples from the most varied disciplines.
 In "Linked" he creates links between, among other things, Hungarian literature, the I-Love-You computer virus, the spread of AIDS, Einstein's theories, the boards of directors of the most important American companies, the Al-Qaeda network, and the structure and function of biological cells. At the beginning of his book Barabási reports on so-called small worlds, in which many objects are connected through only a few links. A glance at one's own wider circle of acquaintances may confirm that many people can be reached in a few steps from acquaintance to acquaintance. Both Barabási and Huberman recount the history of the first social-science experiment on this subject, which in the 1960s tried to determine the number of steps between mutual acquaintances leading from the American Midwest to the East Coast. The finer structure of such systems, in which some nodes enter into far more relationships than the average, leads on to the Internet. On the web, a path between two nodes can by no means always be found, as was still assumed a few years ago, when the average distance had been calculated at 19 clicks; rather, a more differentiated structure prevails, which Barabási presents and in which numerous pages lead into dead ends. Huberman and Barabási also discuss negative aspects of the Internet: while Huberman analyses waiting times and congestion during downloads, Barabási covers the rapid spread of computer viruses and points out the conditions that make this threat possible. Both authors, incidentally, devote their penultimate chapters to markets on the Internet; here at the latest, the economic aspects of networks become clear. Both titles introduce the reader to the new research on the structure of the Internet as a network and are easy to read. Both are scholarly books, but they also address the interested layperson.
 Barabási's book is somewhat more current, more conversational, longer, more comprehensive, and somewhat more popular-scientific."
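The review mentions Google's PageRank, which weights a page more highly the more pages link to it. A minimal power-iteration sketch on a toy four-page link graph; the graph and the damping factor 0.85 are illustrative assumptions, not taken from the book:

```python
# Minimal PageRank by power iteration on a toy link graph.
# The four-page graph and the damping factor 0.85 are illustrative.
links = {          # page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
n = len(pages)
damping = 0.85
rank = {p: 1.0 / n for p in pages}

for _ in range(50):  # iterate until practically converged
    new = {p: (1.0 - damping) / n for p in pages}
    for p, outs in links.items():
        share = damping * rank[p] / len(outs)  # p passes rank to its targets
        for q in outs:
            new[q] += share
    rank = new

# "c" collects the most inlinks, so it ends up with the highest rank.
print(sorted(rank, key=rank.get, reverse=True))  # → ['c', 'a', 'b', 'd']
```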
  7. ¬The Internet in everyday life (2002) 0.00
    Footnote
    Rez. in JASIST 55(2004) no.1, S.278-279 (P.K. Nayar): "We live in an increasingly wired and digitized world. Work, leisure, shopping, research, and interpersonal communications are all mediated by the new technologies. The present volume begins with the assumption that the Internet is not a special system, it is routinely incorporated into the everyday. Wellman and Haythornthwaite note that increasing access and commitment (doing more types of things online), domestication (online access from home), and longer work hours (working from anywhere, including home) are trends in everyday Internet use. In their elaborate introduction to the volume, Wellman and Haythornthwaite explore the varied dimensions of these trends in terms of the digital divide, the demographic issues of Internet use and online behavior (that is, social interaction). This sets the tone for the subsequent essays, most of which are voyages of discovery, seeking patterns of use and behavior. The focus of individual essays is dual: empirical study/data and theoretical conclusions that range from the oracular to the commentary. Readers will find this approach useful because the conclusions drawn are easily verified against statistics (a major part of the volume is comprised of tables and databases). It is also consciously tilted at the developed countries where Internet use is extensive. However, the effort at incorporating data from ethnic communities within developed nations, Japan and India, renders the volume more comprehensive. Some gaps are inevitable in any volume that seeks to survey anything as vast as the role of the Internet in everyday life. There is almost no discussion of subcultural forms that have mushroomed within and because of cyberspace. Now technology, we know, breeds its own brand of discontent. 
 Surely a discussion of hackers, who, as Douglas Thomas has so clearly demonstrated in his book Hacker Culture (2002), see themselves as resisting the new "culture of secrecy" of corporate and political mainstream culture, is relevant to the book's ideas? If the Internet stands for a whole new mode of community building, it also stands for increased surveillance (particularly in the wake of 9/11). Under these circumstances, the use of computer-mediated communication to empower subversion or to control it assumes enormous politico-economic significance. And individual Internet users come into this on an everyday basis, as exemplified by the American housewives who insinuate themselves into terrorist web/chat spaces as sympathizers and crack their identities for the FBI, CIA, and other assorted agencies to follow up on. One more area that could have done with some more survey and study is the rise of a new techno-elite. Techno-elitism, as symbolized by images of the high-power "wired" executive, eventually becomes mainstream culture. Those who control the technology also increasingly control the information banks. The studies in the present volume explore age differentials and class distinctions in the demography of Internet users, but neglect to account for the specific levels of corporate/scientific/political hierarchy occupied by the techno-savvy. R.L. Rutsky's High Techne (1999) has demonstrated how any group (hackers, corporate heads, software engineers) with a high level of technological expertise modulates into an icon of achievement. Tim Jordan in his Cyberpower (1999) and Chris Hables Gray in Cyborg Citizen (2001) also emphasize the link between technological expertise, the rise of a techno-elite, and "Cyberpower." However, it would be boorish, perhaps, to point out such lapses in an excellent volume. The Internet in Everyday Life will be useful to students of cultural, communication, and development studies, cyberculture and social studies of technology."
  8. Waesche, N.M.: Internet entrepreneurship in Europe : venture failure and the timing of telecommunications reform (2003) 0.00
    Footnote
    Rez. in: JASIST 55(2004) no.2, S.181-182 (J. Scholl): "The book is based on a doctoral thesis titled "Global opportunity and national political economy: The development of internet ventures in Germany," which was supervised by Razeen Sally and accepted at the International Relations Department of the London School of Economics & Political Science, UK, in 2002. Its primary audience, although it is certainly of interest to policy makers, trade press journalists, and industry practitioners, is the academic community, and, in particular, (international) policy, business, business history, information technology, and information science scholars. The book's self-stated purpose is to explain "why Europe, despite initiating a tremendous amount of change ... failed to produce independent internet ventures of note" (p. 1) in contrast to the United States, where Internet start-ups such as Amazon.com, eBay, E*trade, and Yahoo managed to survive the notorious dot.com shakeout of 2001-2002. A few pages down, the objective is restated as "to explore the hypothesis of a global opportunity for technology innovation delivered via the internet and to explain Europe's entrepreneurial response" (p. 4). As a proxy case for Europe, the study provides a broad account of the changing legal and socioeconomic setting during the phase of early Internet adoption and development in Germany throughout the 1990s. The author highlights and details various facets of the entrepreneurial opportunity and compares the German case in some detail to corresponding developments in Sweden. Waesche concludes that starting an Internet business in Germany during that particular period of time was a "wrong country, wrong time" (p. 186) proposition.
    Waesche sparsely sketches out a theoretical framework for his study, combining "network thinking," which he claims to stand in the Schumpeterian research tradition, with classical institutional theory a la Max Weber. It is not clear, though, how this theory guided his empirical research. No detailed hypotheses are presented, which would further clarify what was studied. Beyond the rudimentary framework, the author presents a concept of "refraction," denoting the "distorting effect national institutions have on a global innovation opportunity" (p. 17). Again, no hypotheses or measures for this concept are developed. No indication is given about which specific academic contribution was intended to be made and which particular gap of knowledge was attempted to be filled. Waesche's book would have greatly benefited from a more sharply posed and more detailed set of research questions. Instead we learn many details about the German situation in general and about the perceptions of individual players, particularly managerial personnel in entrepreneurial Internet businesses, in a specific situation within a relatively short period of time. While many of those details are interesting in their own right, the reader is left wondering what the study's novelty is, what it specifically uncovered, what the frame of reference was, and what was finally learned. Contrary to its claim, and unlike a Chandlerian treatment of business history, the study does not explain; it merely describes a particular historical situation. Consequently, the author refrains from presenting any new theory or prescriptive framework in his concluding remarks, but rather briefly revisits and summarizes the preceding chapters. The study's empirical basis consists of two surveys with sample sizes of 123 and 30, as well as a total of 68 interviews. The surveys and interviews were mostly completed between July of 1997 and November of 1999.
 Although descriptive statistics and detailed demographic information are provided in the appendix, the questionnaires and interview protocols are not included, making it difficult to follow the research undertaking. In summary, while undeniably a number of interesting and illustrative details regarding early Internet entrepreneurship in Germany are accounted for in Waesche's book, it would have provided a much stronger academic contribution had it developed a sound theory upfront and then empirically tested that theory. Alternatively, the author could have singled out certain gaps in existing theory and then attempted to fill those gaps by providing empirical evidence. In either case, he would have almost inevitably arrived at new insights directing further study."
  9. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    Content
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  10. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    Content
    "The Internet is often likened to an organic entity and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher Young Hyun at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in the early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in the last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium, and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, most alien, and yet most beautiful living creatures [2]. Having looked at the Walrus images I began to wonder: perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based on hyperbolic geometry. This is a non-Euclidean space, which has the useful distorting property of making elements at the center of the display much larger than those on the periphery. 
You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly known as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two or three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, on the order of a million nodes. Providing useful and interactively usable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially, "... we had to face the question of what it means to visualize a large graph. It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in the right direction in tackling these challenges.
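Walrus's own source is not shown in the article, but the "focus+context" idea it describes can be sketched in a few lines: a Möbius transformation of the Poincaré disk carries the selected node to the center, enlarging its neighborhood while compressing the periphery. This is a minimal illustrative sketch, not CAIDA's implementation; the function and variable names are hypothetical, and it works in 2D rather than Walrus's 3D sphere.

```python
def focus_transform(points, focus):
    """Moebius transformation of the Poincare disk that carries `focus`
    to the origin while keeping every point inside the unit disk.
    Nodes near the focus spread out (are magnified) and distant nodes
    are squeezed toward the rim: the 'moveable fish-eye lens' effect.
    `points` is a list of complex numbers with |z| < 1; so is `focus`."""
    a = focus
    return [(z - a) / (1 - a.conjugate() * z) for z in points]

# Toy graph layout: the focused node, a close neighbour,
# a far-away node, and the old centre of the display.
nodes = [0.6 + 0j, 0.65 + 0.05j, -0.8 + 0j, 0 + 0j]
moved = focus_transform(nodes, 0.6 + 0j)
# The focus lands at the origin, its neighbour's distance from it
# grows, and everything stays strictly inside the unit disk.
```

Because the transformation is an isometry of hyperbolic space, panning to a new focus never distorts the graph's hyperbolic structure, which is why the animation of moving a node to the center looks smooth.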
  11. Slatin, J.M.; Rush, S.: Maximum accessibility : Making your Web site more usable for everyone (2003) 0.00
    Footnote
    Rez. in: Information - Wissenschaft und Praxis 55(2004) H.7, S.431-432 (W. Schweibenz): "Maximum Accessibility is a book about the barrier-free accessibility of web sites for people with disabilities, a topic that still receives too little public attention in the German-speaking world even as it grows more pressing. In Germany, for example, federal and state institutions that maintain Internet offerings are affected: the Barrierefreie Informationstechnik-Verordnung (BITV) for federal agencies, and the corresponding state ordinances for state agencies and in part for municipalities and districts, require that the Internet offerings of the public sector (insofar as the ordinances apply to them) be made barrier-free by 31 December 2005. Further information on the legal aspects of accessibility in the Federal Republic of Germany is provided by Drewes (2004a, 2004b) and by the web offerings of the initiatives Einfach für alle and WoB11. In Switzerland, the federal act on the elimination of discrimination against people with disabilities and the associated disability equality ordinance govern questions of the accessibility of web offerings. Note that the Swiss provisions reach further than the German ones: in Switzerland, private providers that offer services to the public may not discriminate against people with disabilities, although no direct claim to accessibility against private providers can be derived from this. In Austria, detailed legal regulation of accessibility is still pending; so far the E-Government Act contains only a declaration of intent for the year 2008 onward. Legal regulation is to be expected, however, because corresponding European Union provisions call for barrier-free web offerings from the administrations of its member states. 
Extensive and comprehensible German-language information on accessibility is offered by the guide of the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, 2003). Jan Eric Hellbusch's online tutorial, which will appear as a book in October 2004, offers an introduction to barrier-free web development. The mailing list Web Accessibility of the competence center BIKA-FIT of the Fraunhofer Institute is a German-language platform for exchange between practitioners and interested parties. So much by way of introduction; now to the actual subject, the book Maximum Accessibility. The authors, John Slatin, who is blind, and the web designer Sharron Rush, are careful to establish that the point is not to spend extra effort on a small group of users when building web offerings; rather, a very large number of users is affected, namely around 54 million Americans and 37 million Europeans. Beyond that, the authors stress that accessibility matters equally to all users, because web sites that are easy for people with disabilities to use are easy for everyone. This also holds for access from mobile devices, which depend on text-based browsing. Barrier-free web pages are, moreover, search-engine optimized, because their content is optimally prepared for full-text indexing.
