Search (16 results, page 1 of 1)

  • theme_ss:"Internet"
  • type_ss:"a"
  • type_ss:"el"
  1. Bünte, O.: Bundesdatenschutzbeauftragte bezweifelt Facebooks Datenschutzversprechen (2018) 0.00
    0.0031077985 = product of:
      0.021754589 = sum of:
        0.013444485 = weight(_text_:system in 4180) [ClassicSimilarity], result of:
          0.013444485 = score(doc=4180,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 4180, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4180)
        0.008310104 = product of:
          0.016620208 = sum of:
            0.016620208 = weight(_text_:22 in 4180) [ClassicSimilarity], result of:
              0.016620208 = score(doc=4180,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.19345059 = fieldWeight in 4180, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4180)
          0.5 = coord(1/2)
      0.14285715 = coord(2/14)
    
    Date
    23. 3.2018 13:41:22
    Footnote
    For background, see also: https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election; https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html; http://www.latimes.com/business/la-fi-tn-facebook-cambridge-analytica-sued-20180321-story.html; https://www.tagesschau.de/wirtschaft/facebook-cambridge-analytica-103.html; http://www.spiegel.de/netzwelt/web/cambridge-analytica-der-eigentliche-skandal-liegt-im-system-facebook-kolumne-a-1199122.html; http://www.spiegel.de/netzwelt/netzpolitik/cambridge-analytica-facebook-sieht-sich-im-datenskandal-als-opfer-a-1199095.html; https://www.heise.de/newsticker/meldung/Datenskandal-um-Cambridge-Analytica-Facebook-sieht-sich-als-Opfer-3999922.html.
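
    The indented breakdown shown above each hit is Lucene's "explain" output for its classic TF-IDF similarity: per-term weights (tf, idf, queryNorm, fieldNorm) are multiplied, summed, and scaled by coordination factors. As a reading aid only, here is a minimal Python sketch that reproduces the arithmetic of this first hit from the values displayed; the function and variable names are illustrative and not part of the search system.

      import math

      def classic_tfidf_term_score(freq, idf, query_norm, field_norm):
          """Recompute one weight(_text_:term) node of a Lucene 'explain' tree."""
          query_weight = idf * query_norm        # queryWeight = idf * queryNorm
          tf = math.sqrt(freq)                   # tf = sqrt(termFreq)
          field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
          return query_weight * field_weight     # score = queryWeight * fieldWeight

      # Values copied from the explanation of hit 1 (doc 4180).
      s_system = classic_tfidf_term_score(freq=2.0, idf=3.1495528,
                                          query_norm=0.02453417, field_norm=0.0390625)
      s_22 = classic_tfidf_term_score(freq=2.0, idf=3.5018296,
                                      query_norm=0.02453417, field_norm=0.0390625)

      # coord(1/2) applies to the inner sum, coord(2/14) to the overall sum.
      total = (s_system + s_22 * 0.5) * (2 / 14)
      print(total)   # ~0.0031078, the score shown for this hit (up to float rounding)
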
  2. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 1: frameworks for linking (1999) 0.00
    0.0030459987 = product of:
      0.02132199 = sum of:
        0.010755588 = weight(_text_:system in 1244) [ClassicSimilarity], result of:
          0.010755588 = score(doc=1244,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.13919188 = fieldWeight in 1244, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=1244)
        0.010566402 = weight(_text_:information in 1244) [ClassicSimilarity], result of:
          0.010566402 = score(doc=1244,freq=20.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.2453355 = fieldWeight in 1244, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1244)
      0.14285715 = coord(2/14)
    
    Abstract
    The creation of services linking related information entities is an area that is attracting an ever increasing interest in the ongoing development of the World Wide Web in general, and of research-related information systems in particular. Although most writings on electronic scientific communication have touted other benefits, such as the increase in communication speed, the possibility to exchange multimedia content and the absence of limitations on the length of research papers, currently both practice and theory point at linking services as being a major opportunity for improved communication of content. Publishers, subscription agents, researchers and libraries are all looking into ways to create added value by linking related information entities, as such presenting the information within a broader context estimated to be relevant to the users of the information. This is the first of two articles in D-Lib Magazine on this topic. This first part describes the current state-of-the-art and contrasts various approaches to the problem. It identifies static and dynamic linking solutions as well as open and closed linking frameworks. It also includes an extensive bibliography. The second part, SFX, a Generic Linking Solution, describes a system that we have developed for linking in a hybrid working environment.
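
    To make the contrast drawn in the abstract concrete, the following small Python sketch (not taken from the article) juxtaposes the two approaches: a static link is chosen once and stored with the record, while a dynamic link is computed at request time from the record's metadata by a link resolver. The resolver address and parameter names are hypothetical.

      from urllib.parse import urlencode

      record = {
          "atitle": "Reference linking in a hybrid library environment",
          "jtitle": "D-Lib Magazine", "volume": "5", "issue": "10", "date": "1999",
          # Static (just-in-case) linking: a URL assigned once and stored with the record.
          "stored_url": "https://publisher.example.org/articles/12345",
      }

      def static_link(rec):
          """Return the pre-assigned link exactly as stored."""
          return rec["stored_url"]

      def dynamic_link(rec, resolver="https://resolver.example.org/link"):
          """Build a link at request time (just-in-time) from the metadata;
          the resolver and its parameters are placeholders."""
          params = {k: v for k, v in rec.items() if k != "stored_url"}
          return resolver + "?" + urlencode(params)

      print(static_link(record))
      print(dynamic_link(record))

    Roughly speaking, an open framework leaves the choice of resolver and of link targets to the institution or user, whereas a closed framework keeps them fixed by the information provider.
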
  3. GERHARD : eine Spezialsuchmaschine für die Wissenschaft (1998) 0.00
    0.0021259645 = product of:
      0.029763501 = sum of:
        0.029763501 = weight(_text_:retrieval in 381) [ClassicSimilarity], result of:
          0.029763501 = score(doc=381,freq=2.0), product of:
            0.07421378 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.02453417 = queryNorm
            0.40105087 = fieldWeight in 381, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=381)
      0.071428575 = coord(1/14)
    
    Theme
    Klassifikationssysteme im Online-Retrieval
  4. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 2: SFX, a generic linking solution (1999) 0.00
    9.603204E-4 = product of:
      0.013444485 = sum of:
        0.013444485 = weight(_text_:system in 1241) [ClassicSimilarity], result of:
          0.013444485 = score(doc=1241,freq=2.0), product of:
            0.07727166 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.02453417 = queryNorm
            0.17398985 = fieldWeight in 1241, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1241)
      0.071428575 = coord(1/14)
    
    Abstract
    This is the second part of two articles about reference linking in hybrid digital libraries. The first part, Frameworks for Linking, described the current state-of-the-art and contrasted various approaches to the problem. It identified static and dynamic linking solutions, as well as open and closed linking frameworks. It also included an extensive bibliography. The second part describes our work at the University of Ghent to address these issues. SFX is a generic linking system that we have developed for our own needs, but its underlying concepts can be applied in a wide range of digital libraries. This is a description of the approach to the creation of extended services in a hybrid library environment that has been taken by the Library Automation team at the University of Ghent. The ongoing research has been grouped under the working title Special Effects (SFX). In order to explain the SFX-concepts in a comprehensive way, the discussion will start with a brief description of pre-SFX experiments. Thereafter, the basics of the SFX-approach are explained briefly, in combination with concrete implementation choices taken for the Elektron SFX-linking experiment. Elektron was the name of a modest digital library collaboration between the Universities of Ghent, Louvain and Antwerp.
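
    As a companion to the abstract, here is an illustrative Python sketch of the general idea of "extended services" offered for a single information entity by an SFX-like resolver; the service names, rules and addresses are hypothetical and far simpler than the actual SFX implementation.

      def extended_services(record):
          """Return a context-sensitive menu of services for one metadata record."""
          services = []
          if record.get("issn"):
              services.append(("Full text",
                               "https://fulltext.example.org/issn/" + record["issn"]))
              services.append(("Holdings in the local catalogue",
                               "https://opac.example.org/search?issn=" + record["issn"]))
          if record.get("author"):
              services.append(("More by this author",
                               "https://index.example.org/author/" + record["author"]))
          return services

      for label, url in extended_services({"issn": "1234-5679", "author": "Van de Sompel"}):
          print(label, "->", url)
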
  5. Landwehr, A.: China schafft digitales Punktesystem für den "besseren" Menschen (2018) 0.00
    9.4972615E-4 = product of:
      0.0132961655 = sum of:
        0.0132961655 = product of:
          0.026592331 = sum of:
            0.026592331 = weight(_text_:22 in 4314) [ClassicSimilarity], result of:
              0.026592331 = score(doc=4314,freq=2.0), product of:
                0.085914485 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02453417 = queryNorm
                0.30952093 = fieldWeight in 4314, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4314)
          0.5 = coord(1/2)
      0.071428575 = coord(1/14)
    
    Date
    22. 6.2018 14:29:46
  6. Brooks, T.A.: Where is meaning when form is gone? : Knowledge representation on the Web (2001) 0.00
    7.160121E-4 = product of:
      0.0100241685 = sum of:
        0.0100241685 = weight(_text_:information in 3889) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=3889,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 3889, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3889)
      0.071428575 = coord(1/14)
    
    Source
    Information Research. 6(2001), no.2
  7. Warnick, W.L.; Lederman, A.; Scott, R.L.; Spence, K.J.; Johnson, L.A.; Allen, V.S.: Searching the deep Web : directed query engine applications at the Department of Energy (2001) 0.00
    7.160121E-4 = product of:
      0.0100241685 = sum of:
        0.0100241685 = weight(_text_:information in 1215) [ClassicSimilarity], result of:
          0.0100241685 = score(doc=1215,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.23274569 = fieldWeight in 1215, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1215)
      0.071428575 = coord(1/14)
    
    Abstract
    Directed Query Engines, an emerging class of search engine specifically designed to access distributed resources on the deep web, offer the opportunity to create inexpensive digital libraries. Already, one such engine, Distributed Explorer, has been used to select and assemble high quality information resources and incorporate them into publicly available systems for the physical sciences. By nesting Directed Query Engines so that one query launches several other engines in a cascading fashion, enormous virtual collections may soon be assembled to form a comprehensive information infrastructure for the physical sciences. Once a Directed Query Engine has been configured for a set of information resources, distributed alerts tools can provide patrons with personalized, profile-based notices of recent additions to any of the selected resources. Due to the potentially enormous size and scope of Directed Query Engine applications, consideration must be given to issues surrounding the representation of large quantities of information from multiple, heterogeneous sources.
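
    The fan-out and cascading behaviour described in the abstract can be pictured with a short Python sketch; the source names and the stubbed search functions below are entirely hypothetical and stand in for distributed deep-web resources.

      def make_engine(sources):
          """Build a directed query engine: forward one query to every configured
          source (each source may itself be another engine) and merge the hits."""
          def search(query):
              hits = []
              for name, searcher in sources.items():
                  hits.extend("[%s] %s" % (name, hit) for hit in searcher(query))
              return hits
          return search

      # Hypothetical leaf sources standing in for distributed deep-web databases.
      def preprints(q): return ["preprint about " + q]
      def reports(q):   return ["technical report on " + q]

      physics = make_engine({"preprints": preprints, "reports": reports})
      top = make_engine({"physics": physics})   # nesting engines = cascading queries
      print(top("deep web"))
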
  8. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.00
    6.2008464E-4 = product of:
      0.008681185 = sum of:
        0.008681185 = weight(_text_:information in 4239) [ClassicSimilarity], result of:
          0.008681185 = score(doc=4239,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.20156369 = fieldWeight in 4239, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
      0.071428575 = coord(1/14)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for all sized projects. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
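
    Since the abstract names the packages but the article's own code is not reproduced here, the following minimal sketch shows the basic extraction pattern with two of them, urllib (standard library) and BeautifulSoup (third party); the target URL is a placeholder.

      from urllib.request import urlopen
      from bs4 import BeautifulSoup   # third-party: pip install beautifulsoup4

      url = "https://www.example.org/"          # placeholder page to harvest
      with urlopen(url) as response:
          html = response.read()

      soup = BeautifulSoup(html, "html.parser")
      # Assemble a small custom dataset: every link text and its target address.
      dataset = [(a.get_text(strip=True), a.get("href"))
                 for a in soup.find_all("a") if a.get("href")]
      for text, href in dataset:
          print(text, "->", href)
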
  9. Van de Sompel, H.; Beit-Arie, O.: Generalizing the OpenURL framework beyond references to scholarly works : the Bison-Futé model (2001) 0.00
    5.167373E-4 = product of:
      0.0072343214 = sum of:
        0.0072343214 = weight(_text_:information in 1223) [ClassicSimilarity], result of:
          0.0072343214 = score(doc=1223,freq=6.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.16796975 = fieldWeight in 1223, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1223)
      0.071428575 = coord(1/14)
    
    Abstract
    This paper introduces the Bison-Futé model, a conceptual generalization of the OpenURL framework for open and context-sensitive reference linking in the web-based scholarly information environment. The Bison-Futé model is an abstract framework that identifies and defines components that are required to enable open and context-sensitive linking on the web in general. It is derived from experience gathered from the deployment of the OpenURL framework over the course of the past year. It is a generalization of the current OpenURL framework in several aspects. It aims to extend the scope of open and context-sensitive linking beyond web-based scholarly information. In addition, it offers a generalization of the manner in which referenced items -- as well as the context in which these items are referenced -- can be described for the specific purpose of open and context-sensitive linking. The Bison-Futé model is not suggested as a replacement of the OpenURL framework. On the contrary: it confirms the conceptual foundations of the OpenURL framework and, at the same time, it suggests directions and guidelines as to how the current OpenURL specifications could be extended to become applicable beyond the scholarly information environment.
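
    For orientation, an OpenURL in the sense of the framework being generalized here is essentially the metadata of a referenced item encoded as query parameters on the address of a user's local link resolver, which then decides which services are relevant in that context. A rough Python sketch in the spirit of the early OpenURL syntax follows; the resolver address and ISSN are placeholders and the field set is only indicative, not the full specification.

      from urllib.parse import urlencode

      def openurl(resolver_base, **metadata):
          """Encode item metadata as query parameters on a link-resolver address."""
          return resolver_base + "?" + urlencode(metadata)

      link = openurl(
          "https://resolver.example.edu/menu",   # placeholder institutional resolver
          genre="article",
          issn="1234-5679",                      # placeholder ISSN
          date="2001",
          atitle="Generalizing the OpenURL framework beyond references to scholarly works",
          aulast="Van de Sompel",
      )
      print(link)
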
  10. Kubiszewski, I.; Cleveland, C.J.: The Encyclopedia of Earth (2007) 0.00
    4.176737E-4 = product of:
      0.0058474317 = sum of:
        0.0058474317 = weight(_text_:information in 1170) [ClassicSimilarity], result of:
          0.0058474317 = score(doc=1170,freq=8.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13576832 = fieldWeight in 1170, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1170)
      0.071428575 = coord(1/14)
    
    Abstract
    The Encyclopedia of Earth (EoE) seeks to become the world's largest and most authoritative electronic source of information about the environments of Earth and their interactions with society. It is a free, fully searchable collection of articles written by scholars, professionals, educators, and experts who collaborate and review each other's work with oversight from an International Advisory Board. The articles are written in non-technical language and are available for free, with no commercial advertising, to students, educators, scholars, professionals, decision makers, as well as to the general public. The scope of the Encyclopedia of Earth is the environment of the Earth broadly defined, with particular emphasis on the interaction between society and the natural spheres of the Earth. It will be built on the integrated knowledge from economists to philosophers to span all aspects of the environment. The Encyclopedia is being built bottom-up through the use of wiki software that allows users to freely create and edit content. New collaborations, ideas, and entries dynamically evolve in this environment. In this way, the Encyclopedia is a constantly evolving, self-organizing, expert-reviewed, and up-to-date source of environmental information. The motivation behind the Encyclopedia of Earth is simple. Go to Google and type in climate change, pesticides, nuclear power, sustainable development, or any other important environmental issue. Doing so returns millions of results, some fraction of which are authoritative. The remainder is of poor or unknown quality.
    This illustrates a stark reality of the Web. There are many resources for environmental content, but there is no central repository of authoritative information that meets the needs of diverse user communities. The Encyclopedia of Earth aims to fill that niche by providing content that is both free and reliable. Still in its infancy, the EoE already is an integral part of the emerging effort to increase free and open access to trusted information on the Web. It is a trusted content source for authoritative indexes such as the Online Access to Research in the Environment Initiative, the Health InterNetwork Access to Research Initiative, the Open Education Resources Commons, Scirus, DLESE, WiserEarth, among others. Our initial Content Partners include the American Institute of Physics, the University of California Museum of Paleontology, TeacherServe®, the U.S. Geological Survey, the International Arctic Science Committee, the World Wildlife Fund, Conservation International, the Biodiversity Institute of Ontario, and the United Nations Environment Programme, to name just a few. The full partner list can be found at <http://www.eoearth.org/article/Content_Partners>. We have a diversity of article types including standard subject articles, biographies, place-based entries, country profiles, and environmental classics. We recently launched our E-Book series, full-text, fully searchable books with internal hyperlinks to EoE articles. The eBooks include new releases by distinguished scholars as well as classics such as Walden and On the Origin of Species. Because history can be an important guide to the future, we have added an Environmental Classics section that includes such historical works as Energy from Fossil Fuels by M. King Hubbert and Undersea by Rachel Carson. Our services and features will soon be expanded. The EoE will soon be available in different languages, giving a wider range of users access; users will be able to search it geographically or by a well-defined, expert-created taxonomy; and teachers will be able to use the EoE to create unique curricula for their courses.
  11. Lietz, C.: Social-Credit-Scoring : die Informationswissenschaft in der Verantwortung (2018) 0.00
    4.176737E-4 = product of:
      0.0058474317 = sum of:
        0.0058474317 = weight(_text_:information in 4592) [ClassicSimilarity], result of:
          0.0058474317 = score(doc=4592,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.13576832 = fieldWeight in 4592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4592)
      0.071428575 = coord(1/14)
    
    Abstract
    Largely unnoticed by information science, but also by the general public, a new kind of rating system is currently emerging in China. Social credit scoring is likely to be a familiar term to only a few people in Germany, and hardly any material on it can be found in the professional literature. Only various international online journals, web blogs, a few TV reports and the re:publica conference deal with it in any depth, which is why the term occasionally comes up in passing in public discourse. For the information sciences this topic is highly relevant. Anyone who looks into it more closely is confronted, as an information professional, with the question of why the professional community largely ignores a topic with such serious consequences for society.
  12. Hyning, V. Van; Lintott, C.; Blickhan, S.; Trouille, L.: Transforming libraries and archives through crowdsourcing (2017) 0.00
    3.5800604E-4 = product of:
      0.0050120843 = sum of:
        0.0050120843 = weight(_text_:information in 2526) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=2526,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 2526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2526)
      0.071428575 = coord(1/14)
    
    Theme
    Information Gateway
  13. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.00
    3.5800604E-4 = product of:
      0.0050120843 = sum of:
        0.0050120843 = weight(_text_:information in 3655) [ClassicSimilarity], result of:
          0.0050120843 = score(doc=3655,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.116372846 = fieldWeight in 3655, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
      0.071428575 = coord(1/14)
    
    Theme
    Information Gateway
  14. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 3: generalizing the SFX solution in the "SFX@Ghent & SFX@LANL" experiment (1999) 0.00
    3.3753135E-4 = product of:
      0.0047254385 = sum of:
        0.0047254385 = weight(_text_:information in 1243) [ClassicSimilarity], result of:
          0.0047254385 = score(doc=1243,freq=4.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.10971737 = fieldWeight in 1243, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1243)
      0.071428575 = coord(1/14)
    
    Abstract
    This is the third part of our papers about reference linking in a hybrid library environment. The first part described the state-of-the-art of reference linking and contrasted various approaches to the problem. It identified static and dynamic linking solutions, open and closed linking frameworks as well as just-in-case and just-in-time linking. The second part introduced SFX, a dynamic, just-in-time linking solution we built for our own purposes. However, we suggested that the underlying concepts were sufficiently generic to be applied in a wide range of digital libraries. In this third part we show how this has been demonstrated conclusively in the "SFX@Ghent & SFX@LANL" experiment. In this experiment, local as well as remote distributed information resources of the digital library collections of the Research Library of the Los Alamos National Laboratory and the University of Ghent Library have been used as starting points for SFX-links into other parts of the collections. The SFX-framework has further been generalized in order to achieve a technology that can easily be transferred from one digital library environment to another and that minimizes the overhead in making the distributed information services that make up those libraries interoperable with SFX. This third part starts with a presentation of the SFX problem statement in light of the recent discussions on reference linking. Next, it introduces the notion of global and local relevance of extended services as well as an architectural categorization of open linking frameworks, also referred to as frameworks that are supportive of selective resolution. Then, an in-depth description of the generalized SFX solution is given.
  15. Hitchcock, S.; Bergmark, D.; Brody, T.; Gutteridge, C.; Carr, L.; Hall, W.; Lagoze, C.; Harnad, S.: Open citation linking : the way forward (2002) 0.00
    2.9833836E-4 = product of:
      0.004176737 = sum of:
        0.004176737 = weight(_text_:information in 1207) [ClassicSimilarity], result of:
          0.004176737 = score(doc=1207,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.09697737 = fieldWeight in 1207, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1207)
      0.071428575 = coord(1/14)
    
    Abstract
    The speed of scientific communication - the rate of ideas affecting other researchers' ideas - is increasing dramatically. The factor driving this is free, unrestricted access to research papers. Measurements of user activity in mature eprint archives of research papers such as arXiv have shown, for the first time, the degree to which such services support an evolving network of texts commenting on, citing, classifying, abstracting, listing and revising other texts. The Open Citation project has built tools to measure this activity, to build new archives, and has been closely involved with the development of the infrastructure to support open access on which these new services depend. This is the story of the project, intertwined with the concurrent emergence of the Open Archives Initiative (OAI). The paper describes the broad scope of the project's work, showing how it has progressed from early demonstrators of reference linking to produce Citebase, a Web-based citation and impact-ranked search service, and how it has supported the development of the EPrints.org software for building OAI-compliant archives. The work has been underpinned by analysis and experiments on the semantics of documents (digital objects) to determine the features required for formally perfect linking - instantiated as an application programming interface (API) for reference linking - that will enable other applications to build on this work in broader digital library information environments.
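
    The OAI-compliant archives mentioned in the abstract expose their records through the OAI-PMH protocol; the short Python sketch below issues one such harvesting request. The repository address is a placeholder, while the verb, parameters and XML namespace come from the public OAI-PMH specification.

      from urllib.parse import urlencode
      from urllib.request import urlopen
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      endpoint = "https://archive.example.org/oai"       # placeholder repository

      params = {"verb": "ListIdentifiers", "metadataPrefix": "oai_dc"}
      with urlopen(endpoint + "?" + urlencode(params)) as response:
          tree = ET.parse(response)

      # Print the OAI identifier of every harvested record header.
      for header in tree.iter(OAI + "header"):
          print(header.findtext(OAI + "identifier"))
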
  16. Beuth, P.: Das Netz der Welt : Lobos Webciety (2009) 0.00
    1.4916918E-4 = product of:
      0.0020883684 = sum of:
        0.0020883684 = weight(_text_:information in 2136) [ClassicSimilarity], result of:
          0.0020883684 = score(doc=2136,freq=2.0), product of:
            0.04306919 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.02453417 = queryNorm
            0.048488684 = fieldWeight in 2136, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.01953125 = fieldNorm(doc=2136)
      0.071428575 = coord(1/14)
    
    Content
    Ambitious social projects can succeed too: Refunite.org is a kind of search engine with which refugees worldwide can search for missing family members. As an example, Lobo cites the English site fixmystreet.co.uk, where people enter their postcode and report road damage or missing signs, often illustrated with photos they have taken themselves. The reports are forwarded to the responsible authority, so that it knows where potholes need to be repaired. What has been improved in a neighbourhood - and what has not - can then be read online. "It is a relatively simple tool, but it uses the net's ability to re-sort information between people to actually make the world better," says Lobo. In 2009, then, Cebit is celebrating the fact that we are all online. In ten years it will celebrate that we no longer even notice it, Lobo believes: "I am convinced that we will be even more networked." He calls this semi-automatic communication: "For example, my mobile phone constantly communicates where I am and makes this information available to a selected circle of people. My calendar becomes intelligent and reports that a friend is in town at the same time. Perhaps it then suggests: 'Don't you want to meet up?' Such functions will be so normal that you are in principle constantly online without it feeling that way." Some of this already exists. Google has just introduced "Latitude", a location service for mobile phones. The software lets selected people see via Google Maps where the phone's owner currently is. The technophile Obama would probably like the service. But the intelligence agency NSA even wanted to take his Blackberry away - so that the most powerful man in the world cannot be located at all times.