Search (18 results, page 1 of 1)

  • × theme_ss:"Internet"
  • × type_ss:"el"
  • × year_i:[2000 TO 2010}
  1. Hitchcock, S.; Bergmark, D.; Brody, T.; Gutteridge, C.; Carr, L.; Hall, W.; Lagoze, C.; Harnad, S.: Open citation linking : the way forward (2002) 0.03
    0.029866911 = product of:
      0.059733823 = sum of:
        0.032089777 = weight(_text_:research in 1207) [ClassicSimilarity], result of:
          0.032089777 = score(doc=1207,freq=4.0), product of:
            0.14397179 = queryWeight, product of:
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.050463587 = queryNorm
            0.22288933 = fieldWeight in 1207, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.8529835 = idf(docFreq=6931, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1207)
        0.027644044 = product of:
          0.055288088 = sum of:
            0.055288088 = weight(_text_:network in 1207) [ClassicSimilarity], result of:
              0.055288088 = score(doc=1207,freq=2.0), product of:
                0.22473325 = queryWeight, product of:
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.050463587 = queryNorm
                0.2460165 = fieldWeight in 1207, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.4533744 = idf(docFreq=1398, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1207)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
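The breakdown above is Lucene "explain" output for ClassicSimilarity, the classic TF-IDF scoring model. As a sanity check, a minimal sketch (plain Python, no Lucene) recomputes the headline score of result 1 from the printed factors:

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One ClassicSimilarity term score:
    queryWeight * fieldWeight
      = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.050463587   # shared by every clause of the query
FIELD_NORM = 0.0390625     # length norm stored for doc 1207

# weight(_text_:research): freq=4.0, idf=2.8529835
research = term_score(4.0, 2.8529835, QUERY_NORM, FIELD_NORM)

# weight(_text_:network): freq=2.0, idf=4.4533744, scaled by coord(1/2)
network = 0.5 * term_score(2.0, 4.4533744, QUERY_NORM, FIELD_NORM)

# top-level coord(2/4): only 2 of the 4 query clauses matched doc 1207
total = 0.5 * (research + network)   # matches the 0.029866911 shown above
```

The `coord` factors penalize documents that match only some of the query clauses, which is why two of the four facet terms contribute here.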
    
    Abstract
    The speed of scientific communication - the rate of ideas affecting other researchers' ideas - is increasing dramatically. The factor driving this is free, unrestricted access to research papers. Measurements of user activity in mature eprint archives of research papers such as arXiv have shown, for the first time, the degree to which such services support an evolving network of texts commenting on, citing, classifying, abstracting, listing and revising other texts. The Open Citation project has built tools to measure this activity, to build new archives, and has been closely involved with the development of the infrastructure to support open access on which these new services depend. This is the story of the project, intertwined with the concurrent emergence of the Open Archives Initiative (OAI). The paper describes the broad scope of the project's work, showing how it has progressed from early demonstrators of reference linking to produce Citebase, a Web-based citation and impact-ranked search service, and how it has supported the development of the EPrints.org software for building OAI-compliant archives. The work has been underpinned by analysis and experiments on the semantics of documents (digital objects) to determine the features required for formally perfect linking - instantiated as an application programming interface (API) for reference linking - that will enable other applications to build on this work in broader digital library information environments.
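The OAI-compliant archives the project supports are harvestable over the OAI-PMH protocol. As a hedged sketch (the endpoint URL is hypothetical; the verb, metadataPrefix, and namespace strings are those of the OAI-PMH v2.0 specification), a harvester might build a request and pull titles from the response like this:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical base URL; real EPrints or arXiv endpoints follow the same pattern.
BASE_URL = "http://archive.example.org/oai2"

def list_records_url(base, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL (protocol v2.0)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec is not None:
        params["set"] = set_spec
    return base + "?" + urlencode(params)

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def titles(oai_response_xml):
    """Pull Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(oai_response_xml)
    return [el.text for el in root.iter(DC + "title")]

url = list_records_url(BASE_URL)
```

A reference-linking service such as Citebase layers citation extraction and impact ranking on top of metadata harvested in roughly this way.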
  2. Kubiszewski, I.; Cleveland, C.J.: The Encyclopedia of Earth (2007) 0.02
    
    Abstract
    This illustrates a stark reality of the Web. There are many resources for environmental content, but there is no central repository of authoritative information that meets the needs of diverse user communities. The Encyclopedia of Earth aims to fill that niche by providing content that is both free and reliable. Still in its infancy, the EoE is already an integral part of the emerging effort to increase free and open access to trusted information on the Web. It is a trusted content source for authoritative indexes such as the Online Access to Research in the Environment Initiative, the Health InterNetwork Access to Research Initiative, the Open Education Resources Commons, Scirus, DLESE, and WiserEarth, among others. Our initial Content Partners include the American Institute of Physics, the University of California Museum of Paleontology, TeacherServe®, the U.S. Geological Survey, the International Arctic Science Committee, the World Wildlife Fund, Conservation International, the Biodiversity Institute of Ontario, and the United Nations Environment Programme, to name just a few. The full partner list can be found at <http://www.eoearth.org/article/Content_Partners>. We have a diversity of article types, including standard subject articles, biographies, place-based entries, country profiles, and environmental classics. We recently launched our e-book series: full-text, fully searchable books with internal hyperlinks to EoE articles. The e-books include new releases by distinguished scholars as well as classics such as Walden and On the Origin of Species. Because history can be an important guide to the future, we have added an Environmental Classics section that includes such historical works as Energy from Fossil Fuels by M. King Hubbert and Undersea by Rachel Carson. Our services and features will soon be expanded.
The EoE will soon be available in different languages, giving a wider range of users access; users will be able to search it geographically or by a well-defined, expert-created taxonomy, and teachers will be able to use the EoE to create unique curricula for their courses.
  3. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.02
    
    Content
    "The Internet is often likened to an organic entity, and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher Young Hyun at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in the early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in the last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium, and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, most alien, and yet most beautiful living creatures [2]. Having looked at the Walrus images, I began to wonder: perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based on hyperbolic geometry. This is a non-Euclidean space, which has the useful distorting property of making elements at the center of the display much larger than those on the periphery.
You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly known as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two or three dimensions [4]. It can be thought of as a movable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, on the order of a million nodes. Providing useful and interactively usable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially, "... we had to face the question of what it means to visualize a large graph. It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in the right direction in tackling these challenges.
    However, Hyun points out that it is still early days, and over the next six months or so Walrus will be extended to include core functions beyond just visualizing raw topology graphs. For CAIDA, it is important to see performance measurements associated with the links; as Hyun notes, "you can imagine how important this is to our visualizations, given that we are almost never interested in the mere topology of a network." Walrus has not revealed much new scientific knowledge of the Internet thus far, although Hyun commented that the current visualization of topology "did make it easy to see the degree to which the network is in tangles, how some nodes form large clusters and how they are seemingly interconnected in random ways." This random connectedness is perhaps what gives the Internet its organic look and feel. Of course this is not the real shape of the Internet. One must always be wary when viewing and interpreting any particular graph visualization, as much of the final "look and feel" results from subjective decisions of the analyst rather than being inherent in the underlying phenomena. As Hyun pointed out, "... the organic quality of the images derives almost entirely from the particular combination of the layout algorithm used and hyperbolic distortion." There is no inherently "natural" shape when visualizing massive data, such as the topology of the global Internet, in an abstract space. Somewhat like a jellyfish, maybe? ----
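The "fish-eye lens" behaviour of a hyperbolic projection can be sketched numerically. The snippet below is not Walrus code; it is a toy radial distortion (the tanh profile and the k parameter are illustrative choices) showing the focus+context property: distances near the focus are stretched while the periphery is compressed:

```python
import math

def fisheye(r, k=3.0):
    """Toy radial focus+context distortion: map a normalized distance
    r in [0, 1] to tanh(k * r) / tanh(k). Distances near the focus
    (r close to 0) are stretched; the periphery is compressed."""
    return math.tanh(k * r) / math.tanh(k)

# A node at 10% of the radius lands at roughly 29% of the display
# radius, while a node at 90% barely moves: the center is magnified.
near = fisheye(0.1)
far = fisheye(0.9)
```

Selecting a new focus node amounts to re-centering the projection so a different region of the graph receives this magnification.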
    What Is CAIDA? The Cooperative Association for Internet Data Analysis started in 1997 and is based at the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the world's leading teams of academic researchers studying how the Internet works [6]. Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scaleable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system, which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7]. MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that it is based on a static database of ISP backbone links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.
  4. Brooks, T.A.: Where is meaning when form is gone? : Knowledge representation on the Web (2001) 0.01
    
    Source
    Information Research. 6(2001), no.2
  5. Choo, C.W.; Detlor, B.; Turnbull, D.: Information seeking on the Web : an integrated model of browsing and searching (2000) 0.01
    
    Abstract
    This paper presents findings from a study of how knowledge workers use the Web to seek external information as part of their daily work. Thirty-four users from seven companies took part in the study. Participants were mainly IT specialists, managers, and research/marketing/consulting staff working in organizations that included a large utility company, a major bank, and a consulting firm. Participants answered a detailed questionnaire and were interviewed individually in order to understand their information needs and information-seeking preferences. A custom-developed WebTracker software application was installed on each of their workplace PCs, and participants' Web-use activities were then recorded continuously during two-week periods.
  6. Wesch, M.: Web 2.0 ... The Machine is Us/ing Us (2006) 0.01
    
    Date
    5. 1.2008 19:22:48
  7. Networked knowledge organization systems (2001) 0.01
    
    Content
    This issue of the Journal of Digital Information evolved from a workshop on Networked Knowledge Organization Systems (NKOS) held at the Fourth European Conference on Research and Advanced Technology for Digital Libraries (ECDL2000) in Lisbon in September 2000. The focus of the workshop was European NKOS initiatives and projects and options for global cooperation. The workshop organizers were Martin Doerr, Traugott Koch, Douglas Tudhope and Repke de Vries. This group, with Traugott Koch as the main editor and with the help of Linda Hill, cooperated in the editorial tasks for this special issue.
  8. Bailey, C.W. Jr.: Scholarly electronic publishing bibliography (2003) 0.01
    
    Content
    Table of Contents
    1 Economic Issues*
    2 Electronic Books and Texts
      2.1 Case Studies and History
      2.2 General Works*
      2.3 Library Issues*
    3 Electronic Serials
      3.1 Case Studies and History
      3.2 Critiques
      3.3 Electronic Distribution of Printed Journals
      3.4 General Works*
      3.5 Library Issues*
      3.6 Research*
    4 General Works*
    5 Legal Issues
      5.1 Intellectual Property Rights*
      5.2 License Agreements
      5.3 Other Legal Issues
    6 Library Issues
      6.1 Cataloging, Identifiers, Linking, and Metadata*
      6.2 Digital Libraries*
      6.3 General Works*
      6.4 Information Integrity and Preservation*
    7 New Publishing Models*
    8 Publisher Issues
      8.1 Digital Rights Management*
    9 Repositories and E-Prints*
    Appendix A. Related Bibliographies by the Same Author
    Appendix B. About the Author
  9. Jacobsen, G.: Webarchiving internationally : interoperability in the future? (2007) 0.01
    
    Abstract
    Several national libraries are collecting parts of the Internet or planning to do so, but in order to render a complete impression of the Internet, web archives must be interoperable, enabling a user to make seamless searches. A questionnaire on this issue was sent to 95 national libraries. The answers show agreement with this goal and that web archiving is becoming more common. Partnering is a key ingredient in moving forward, and a useful distinction is suggested in the labels curatorial partners (archives, museums) and technical partners (private companies, universities, other research institutions). Working with private, for-profit companies may also force national libraries to leave room for unorthodox thinking and experimenting. The biggest challenge right now is to make legal deposit, copyright and other legislation adapt to an Internet world, so we can preserve it and make it available to present and future generations.
  10. Weber, S.: Kommen nach den "science wars" die "reference wars"? : Wandel der Wissenskultur durch Netzplagiate und das Google-Wikipedia-Monopol (2005) 0.01
    
  11. Schneider, R.: Bibliothek 1.0, 2.0 oder 3.0? (2008) 0.01
    
    Abstract
    It is not yet decided how forcefully the so-called Web 2.0 will change libraries. However, here and there, with reference to the so-called Semantic Web, there is already talk of a third, and in some places even a fourth, generation of the Web. The talk critically examines which concepts lie behind these labels and asks what challenges adopting these concepts would bring for the library world. See in particular slide 22 for a depiction of the development from Web 1.0 to Web 4.0.
  12. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.01
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
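The abstract does not reproduce the raw size estimate itself. As an illustration only of how overlap-based raw estimates of an uncounted collection arise, and why a small sample makes them imprecise, here is a Lincoln-Petersen capture-recapture sketch with invented numbers (this is a stand-in technique, not necessarily the authors' method):

```python
def capture_recapture(n1, n2, overlap):
    """Lincoln-Petersen estimator: if two independent samples of sizes n1
    and n2 drawn from the same population share `overlap` items, the
    population size is estimated as n1 * n2 / overlap."""
    if overlap == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    return n1 * n2 / overlap

# Invented numbers: two crawls of 4,000 and 5,000 records with 200 in
# common suggest a collection of about 100,000 records. A small overlap
# makes the estimate very noisy, the limitation the abstract notes.
estimate = capture_recapture(4000, 5000, 200)
```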
  13. Schetsche, M.: Die ergoogelte Wirklichkeit : Verschwörungstheorien und das Internet (2005) 0.01
    
    Abstract
    "Google twice a day" is the advice Mathias Bröckers gives in his book "Verschwörungen, Verschwörungstheorien und die Geheimnisse des 11.9.". To the respectable media, from the FAZ to Der Spiegel, the volume counts as a textbook case of pathological conspiracy theory. Yet the author, by his own account, did not set out to present a conspiracy theory about September 11, but merely to point out contradictions and questionable points in the official accounts and explanations given by the US government for that terrorist attack. Regardless of how seriously the author's assurances are to be taken, the "Bröckers case" is interesting for research into conspiracy theories in two respects: first, the volume grew out of a conspirological diary that the author wrote for the online magazine Telepolis between 13 September 2001 and 22 March 2002; second, Bröckers claims in the book's introduction to have used only sources accessible via the Net. In this, Google rendered him indispensable services: "To get at the information in this book I needed neither special connections nor clandestine meetings with spooks and turban wearers - all the sources lie in the open. In finding them, the Internet search engine Google performed invaluable services for me." Mathias Bröckers
  14. Noerr, P.: The Digital Library Tool Kit (2001) 0.00
    
    Abstract
    This second edition is an update and expansion of the original April 1998 edition. It contains more of everything. In particular, the resources section has been expanded and updated. This document is designed to help those who are contemplating setting up a digital library. Whether this is a first-time computerization effort or an extension of an existing library's services, there are questions to be answered, decisions to be made, and work to be done. This document covers all those stages and more. The first section (Chapter 1) is a series of questions to ask yourself and your organization. The questions are designed generally to raise issues rather than to provide definitive answers. The second section (Chapters 2-5) discusses the planning and implementation of a digital library. It raises some issues which are specific, and contains information to help answer the specifics and a host of other aspects of a digital library project. The third section (Chapters 6-7) includes resources and a look at current research, existing digital library systems, and the future. These chapters enable you to find additional resources and help, as well as show you where to look for interesting examples of the current state of the art.
  15. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.00
    
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview on discipline-specific classifications in Mathematics, Computing and Physics, on one hand, and on general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions of a more widespread application of such a method. 
A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
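    The coreference method mentioned above exploits records that are indexed under two classification systems at once: codes that repeatedly co-occur in the same records are candidate correspondences. A minimal sketch of this idea, using hypothetical sample records with invented MSC and DDC codes (the field names and data are illustrative assumptions, not drawn from any real catalog):

    ```python
    from collections import Counter, defaultdict

    # Hypothetical bibliographic records, each indexed under both a
    # discipline-specific scheme ("msc") and a general scheme ("ddc").
    records = [
        {"msc": ["68P20"], "ddc": ["025.04"]},           # information retrieval
        {"msc": ["68P20"], "ddc": ["025.04"]},
        {"msc": ["03B70"], "ddc": ["005.1"]},            # logic in computer science
        {"msc": ["03B70"], "ddc": ["005.1"]},
        {"msc": ["68P20", "03B70"], "ddc": ["025.04"]},
    ]

    def coreference_map(records, src, dst):
        """For each source code, count how often each target code is
        co-assigned in the same record, and propose the most frequent
        co-assigned target code as its correspondence."""
        pairs = defaultdict(Counter)
        for rec in records:
            for s in rec[src]:
                for d in rec[dst]:
                    pairs[s][d] += 1
        return {s: ctr.most_common(1)[0][0] for s, ctr in pairs.items()}

    mapping = coreference_map(records, "msc", "ddc")
    print(mapping)  # {'68P20': '025.04', '03B70': '005.1'}
    ```

    In practice such a mapping would be derived from large sets of dually indexed records (e.g. from databases applying both MSC and a general scheme) and filtered by frequency thresholds, but the co-occurrence counting shown here is the core of the method.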
  16. Blosser, J.; Michaelson, R.; Routh, R.; Xia, P.: Defining the landscape of Web resources : Concluding Report of the BAER Web Resources Sub-Group (2000) 0.00
    
    Date
    21. 4.2002 10:22:31
  17. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    
    Abstract
    The OWL Web Ontology Language has been a W3C Recommendation since 2004, and the specification of its successor, OWL 2, is being finalised. OWL plays an important role in an increasing number and range of applications, and as experience using the language grows, new ideas for further extending its reach continue to be proposed. The OWL: Experiences and Directions (OWLED) workshop series is a forum for practitioners in industry and academia, tool developers, and others interested in OWL to describe real and potential applications, to share experience, and to discuss requirements for language extensions and modifications. The workshop will bring users, implementors and researchers together to measure the state of need against the state of the art, and to set an agenda for research and deployment in order to incorporate OWL-based technologies into new applications. This year's 2009 OWLED workshop will be co-located with the Eighth International Semantic Web Conference (ISWC) and the Third International Conference on Web Reasoning and Rule Systems (RR2009). It will be held in Chantilly, VA, USA on October 23-24, 2009. The workshop will concentrate on issues related to the development and W3C standardization of OWL 2 and beyond, but other issues related to OWL are also of interest, particularly those related to the task forces set up at OWLED 2007. As usual, the workshop will encourage participants to work together and will give space for discussions on various topics, to be decided and published at some point in the future. We ask participants to have a look at these topics and the accepted submissions before the workshop, and to prepare single "slides" that can be presented during these discussions. There will also be formal presentations of submissions to the workshop.
  18. cis: Nationalbibliothek will das deutsche Internet kopieren [National Library plans to copy the German internet] (2008) 0.00
    
    Date
    24.10.2008 14:19:22