Search (19 results, page 1 of 1)

  • theme_ss:"Internet"
  • type_ss:"el"
  • year_i:[2000 TO 2010}
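The facets above correspond to Solr filter queries; `[2000 TO 2010}` is Solr's half-open range syntax (2000 inclusive, 2010 exclusive). As a minimal sketch, such a filtered request could be assembled like this (the query term and endpoint are assumptions for illustration; the field names are taken from the facets above):

```python
from urllib.parse import urlencode

def build_solr_query(q, filters, rows=20):
    """Assemble Solr query parameters; each active facet becomes an fq clause."""
    params = [("q", q), ("rows", rows)]
    params += [("fq", f) for f in filters]
    return urlencode(params)

# Facets from the result page above; the year range is inclusive-exclusive.
qs = build_solr_query("context", [
    'theme_ss:"Internet"',
    'type_ss:"el"',
    "year_i:[2000 TO 2010}",
])
```

The resulting string is URL-encoded, so the three `fq` clauses arrive as separate parameters and the range brackets survive as `%5B` and `%7D`.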
  1. Van de Sompel, H.; Beit-Arie, O.: Generalizing the OpenURL framework beyond references to scholarly works : the Bison-Futé model (2001) 0.02
    0.01804435 = product of:
      0.09022175 = sum of:
        0.09022175 = weight(_text_:context in 1223) [ClassicSimilarity], result of:
          0.09022175 = score(doc=1223,freq=10.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.511974 = fieldWeight in 1223, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1223)
      0.2 = coord(1/5)
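The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a sanity check, the leaf values reproduce the reported top-level score; a minimal sketch in plain Python, no Lucene required:

```python
import math

# Leaf values copied from the explain tree above (entry 1, term "context").
freq       = 10.0
idf        = 4.14465      # inverse document frequency for "context"
query_norm = 0.04251826
field_norm = 0.0390625
coord      = 1 / 5        # 1 of 5 query terms matched this document

tf = math.sqrt(freq)                   # 3.1622777
query_weight = idf * query_norm        # 0.17622331
field_weight = tf * idf * field_norm   # 0.511974
score = query_weight * field_weight * coord
# score ≈ 0.01804435, the value reported for this entry
```

The same arithmetic, with different `freq` and `field_norm` leaves, accounts for every per-entry score in this result list.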
    
    Abstract
    This paper introduces the Bison-Futé model, a conceptual generalization of the OpenURL framework for open and context-sensitive reference linking in the web-based scholarly information environment. The Bison-Futé model is an abstract framework that identifies and defines components that are required to enable open and context-sensitive linking on the web in general. It is derived from experience gathered from the deployment of the OpenURL framework over the course of the past year. It is a generalization of the current OpenURL framework in several aspects. It aims to extend the scope of open and context-sensitive linking beyond web-based scholarly information. In addition, it offers a generalization of the manner in which referenced items -- as well as the context in which these items are referenced -- can be described for the specific purpose of open and context-sensitive linking. The Bison-Futé model is not suggested as a replacement of the OpenURL framework. On the contrary: it confirms the conceptual foundations of the OpenURL framework and, at the same time, it suggests directions and guidelines as to how the current OpenURL specifications could be extended to become applicable beyond the scholarly information environment.
  2. Stoklasova, B.; Balikova, M.; Celbová, L.: Relationship between subject gateways and national bibliographies in international context (engl. Fassung) (2003) 0.02
  3. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.02
    Content
     "The Internet is often likened to an organic entity, and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are the results of a new graph visualization tool, code-named Walrus, being developed by researcher Young Hyun at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in the early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in the last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium, and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, most alien, and yet most beautiful living creatures [2]. Having looked at the Walrus images, I began to wonder: perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based on hyperbolic geometry. This is a non-Euclidean space, which has the useful distorting property of making elements at the center of the display much larger than those on the periphery.
You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display; that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context for the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly known as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two or three dimensions [4]. It can be thought of as a movable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, on the order of a million nodes. Providing useful and interactively usable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping Internet backbone infrastructures. In a recent email, Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially, "... we had to face the question of what it means to visualize a large graph. It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in the right direction in tackling these challenges.
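The "focus+context" distortion described above can be illustrated in one dimension: a hyperbolic-style mapping spreads points near the focus apart and compresses the periphery. This is a toy sketch of the idea only, not Walrus's actual 3D hyperbolic projection:

```python
import math

def fisheye(x, focus=0.0, k=3.0):
    """Map x in [-1, 1] so points near `focus` spread apart and the
    periphery compresses; k controls the distortion strength."""
    return math.tanh(k * (x - focus)) / math.tanh(k * (1 + abs(focus)))

# Two equally spaced pairs of points: one near the focus at 0, one far away.
near = fisheye(0.1) - fisheye(0.0)   # gap is magnified near the focus
far  = fisheye(0.9) - fisheye(0.8)   # gap is compressed at the periphery
```

Because `tanh` is steep near the origin and flat in its tails, the `near` gap comes out many times larger than the `far` gap, which is exactly the movable fish-eye effect the article describes.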
     What Is CAIDA? The Cooperative Association for Internet Data Analysis was started in 1997 and is based at the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the world's leading teams of academic researchers studying how the Internet works [6]. Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scaleable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7]. MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that it is based on a static database of ISP backbone links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.
  4. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.02
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  5. McQueen, T.F.; Fleck, R.A. Jr.: Changing patterns of Internet usage and challenges at colleges and universities (2005) 0.01
    Abstract
    Increased enrollments, changing student expectations, and shifting patterns of Internet access and usage continue to generate resource and administrative challenges for colleges and universities. Computer center staff and college administrators must balance increased access demands, changing system loads, and system security within constrained resources. To assess the changing academic computing environment, computer center directors from several geographic regions were asked to respond to an online questionnaire that assessed patterns of usage, resource allocation, policy formulation, and threats. Survey results were compared with data from a study conducted by the authors in 1999. The analysis includes changing patterns in Internet usage, access, and supervision. The paper also presents details of usage by institutional type and application as well as recommendations for more precise resource assessment by college administrators.
  6. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.01
    Abstract
     Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  7. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.01
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview on discipline-specific classifications in Mathematics, Computing and Physics, on one hand, and on general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions of a more widespread application of such a method. 
A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
  8. Alfaro, L.de: How (much) to trust Wikipedia (2008) 0.01
    Abstract
     The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an "edit" button. The open nature of the Wikipedia has been key to its success, but has a flip side: if anyone can edit, how can readers know whether to trust its content? To help answer this question, we have developed a reputation system for Wikipedia authors, and a trust system for Wikipedia text. Authors gain reputation when their contributions are long-lived, and they lose reputation when their contributions are undone in short order. Each word in the Wikipedia is assigned a value of trust that depends on the reputation of its author, as well as on the reputation of the authors that subsequently revised the text where the word appears. To validate our algorithms, we show that reputation and trust have good predictive value: higher-reputation authors are more likely to give lasting contributions, and higher-trust text is less likely to be edited. The trust can be visualized via an intuitive coloring of the text background. The coloring provides an effective way of spotting attempts to tamper with Wikipedia information. A trust-colored version of the entire English Wikipedia can be browsed at http://trust.cse.ucsc.edu/
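The reputation dynamic in the abstract above (gain when a contribution survives, loss when it is undone in short order) can be caricatured in a few lines. This is a toy sketch of the general idea only, not the authors' actual algorithm, which weighs edit distances and reviewer reputations:

```python
def update_reputation(rep, survived_review):
    """Toy update: reputation rises when an edit survives a later revision
    and falls, more steeply, when the edit is undone. Floored at zero.
    (Illustrative only; not the paper's real update rule.)"""
    return rep + 1.0 if survived_review else max(0.0, rep - 2.0)

# An author whose edits are kept, kept, reverted, then kept:
rep = 5.0
for outcome in [True, True, False, True]:
    rep = update_reputation(rep, outcome)
# rep: 5 -> 6 -> 7 -> 5 -> 6
```

The asymmetry (a revert costs more than a survival earns) mirrors the paper's intuition that short-lived contributions should be penalized quickly.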
  9. Wilson, R.: ¬The role of ontologies in teaching and learning (2004) 0.01
    Abstract
     Ontologies are currently a buzzword in many communities, hailed as a mechanism for making better use of the Web. They offer a shared definition of a domain that can be understood by computers, enabling them to complete more meaningful tasks. Although ontologies of different descriptions have been in development and use for some time, it is their potential as a key technology in the Semantic Web which is responsible for the current wave of interest. Communities have different expectations of the Semantic Web and how it will be realised, but it is generally believed that ontologies will play a major role. In light of their potential in this new context, much current effort is focusing on developing languages and tools. OWL (Web Ontology Language) has recently become a standard, and builds on top of existing Web languages such as XML and RDF to offer a high degree of expressiveness. A variety of tools are emerging for creating, editing and managing ontologies in OWL. Ontologies have a range of potential benefits and applications in further and higher education, including the sharing of information across educational systems, providing frameworks for learning object reuse, and enabling intelligent and personalised student support. The difficulties inherent in creating a model of a domain are being tackled, and the communities involved in ontology development are working together to achieve their vision of the Semantic Web. This Technology and Standards Watch report discusses ontologies and their role in the Semantic Web, with a special focus on their implications for teaching and learning. This report will introduce ontologies to the further and higher education community, explaining why they are being developed, what they hope to achieve, and their potential benefits to the community. Current ontology tools and standards will be described, and the emphasis will be on introducing the technology to a new audience and exploring its risks and potential applications in teaching and learning. At a time when educational programmes based on ontologies are starting to be developed, the author hopes to increase understanding of the key issues in the wider community.
  10. Lange, H.: Wissensdatenbanken im Netz : Internetrecherche für das Projekt EFIL (2000) 0.00
    Date
    15. 8.2001 10:29:54
  11. Rudner, L.: Who is going to mine digital library resources? : and how? (2000) 0.00
    Date
    26.12.2011 16:38:29
  12. Schmidt, J.; Horn, A.; Thorsen, B.: Australian Subject Gateways, the successes and the challenges (2003) 0.00
    Date
    26.12.2011 12:46:29
  13. Wesch, M.: Web 2.0 ... The Machine is Us/ing Us (2006) 0.00
    Date
    5. 1.2008 19:22:48
  14. Weber, S.: Kommen nach den "science wars" die "reference wars"? : Wandel der Wissenskultur durch Netzplagiate und das Google-Wikipedia-Monopol (2005) 0.00
    Date
    29. 9.2005 17:18:36
  15. Schneider, R.: Bibliothek 1.0, 2.0 oder 3.0? (2008) 0.00
    Abstract
     It is not yet decided how forcefully the so-called Web 2.0 will change libraries. Here and there, however, with reference to the so-called Semantic Web, people are already speaking of a third, and in some places a fourth, generation of the Web. The talk critically examines which concepts lie behind these labels and pursues the question of what challenges an adoption of these concepts would bring for the library world. See in particular slide 22, with a depiction of the development from Web 1.0 to Web 4.0.
  16. Schetsche, M.: ¬Die ergoogelte Wirklichkeit : Verschwörungstheorien und das Internet (2005) 0.00
    Abstract
    "Google twice a day" is what Mathias Bröckers recommends in his book "Verschwörungen, Verschwörungstheorien und die Geheimnisse des 11.9.". The respectable media, from FAZ to Spiegel, regard the volume as a textbook case of pathological conspiracy theory. Yet the author, by his own account, did not intend to present a conspiracy theory about September 11, but merely to point out contradictions and dubious points in the official accounts and explanations of the US government concerning that terrorist attack. Regardless of how seriously these statements by the author are to be taken, the "Bröckers case" is interesting for the study of conspiracy theories in two respects: first, the volume grew out of a conspirological diary that the author kept for the online magazine Telepolis between 13 September 2001 and 22 March 2002; second, Bröckers claims in the introduction to the book that he used only sources accessible via the net. In this, Google rendered him indispensable services: To get at the information in this book, I needed neither special connections nor clandestine meetings with spooks and turban wearers - all the sources lie open. In finding them, the Internet search engine Google did me invaluable service. Mathias Bröckers
  17. Wirtz, B.: Deutschland online : unser Leben im Netz (2008) 0.00
    0.0015501144 = product of:
      0.0077505717 = sum of:
        0.0077505717 = product of:
          0.023251714 = sum of:
            0.023251714 = weight(_text_:29 in 4729) [ClassicSimilarity], result of:
              0.023251714 = score(doc=4729,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15546128 = fieldWeight in 4729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4729)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Content
    "Key statements:
    - The particular importance of information technology and telecommunications will increase markedly over the coming years. By 2015, their current share of gross domestic product is expected to nearly double, to almost 12 percent.
    - The number of broadband connections will rise considerably: over 21 million connections are expected by 2010 and more than 29 million by 2015, meaning that over 80 percent of all German households will have a broadband connection in 2015.
    - The strong growth in capacity, in the form of bandwidth, will continue through 2015.
    - Communication, entertainment offerings, and e-commerce will in future be the most important forms of broadband Internet use.
  18. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.00
    0.0015361699 = product of:
      0.0076808496 = sum of:
        0.0076808496 = product of:
          0.023042548 = sum of:
            0.023042548 = weight(_text_:22 in 1447) [ClassicSimilarity], result of:
              0.023042548 = score(doc=1447,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15476047 = fieldWeight in 1447, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1447)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    21. 4.2002 10:22:31
  19. cis: Nationalbibliothek will das deutsche Internet kopieren (2008) 0.00
    0.0013441488 = product of:
      0.0067207436 = sum of:
        0.0067207436 = product of:
          0.02016223 = sum of:
            0.02016223 = weight(_text_:22 in 4609) [ClassicSimilarity], result of:
              0.02016223 = score(doc=4609,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.1354154 = fieldWeight in 4609, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4609)
          0.33333334 = coord(1/3)
      0.2 = coord(1/5)
    
    Date
    24.10.2008 14:19:22