Search (100 results, page 2 of 5)

  • × theme_ss:"Internet"
  • × type_ss:"el"
  1. Jacobsen, G.: Webarchiving internationally : interoperability in the future? (2007) 0.01
    0.005549766 = product of:
      0.013874415 = sum of:
        0.009138121 = weight(_text_:a in 699) [ClassicSimilarity], result of:
          0.009138121 = score(doc=699,freq=10.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 699, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=699)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 699) [ClassicSimilarity], result of:
              0.009472587 = score(doc=699,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 699, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=699)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
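    A note on the scores: each hit carries a standard Lucene ClassicSimilarity "explain" tree, in which every term weight is queryWeight × fieldWeight = (idf × queryNorm) × (√tf × idf × fieldNorm), and clause sums are scaled by coord(matching clauses / total clauses). As a minimal sketch (not part of the original result page), the following Python recomputes the first tree above from its printed freq, idf, queryNorm and fieldNorm values:

        import math

        def classic_sim_term(freq, idf, query_norm, field_norm):
            # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm
            return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

        QUERY_NORM = 0.046368346  # shared by all clauses of this query

        w_a = classic_sim_term(10.0, 1.153047, QUERY_NORM, 0.046875)
        # coord(1/2): only one of the two clauses of the sub-query matched
        w_info = classic_sim_term(2.0, 1.7554779, QUERY_NORM, 0.046875) * 0.5
        # coord(2/5): two of the five top-level clauses matched
        score = (w_a + w_info) * 0.4
        print(w_a, w_info, score)  # ~0.009138121, ~0.0047362936, ~0.005549766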
    
    Abstract
    Several national libraries are collecting parts of the Internet or planning to do so, but in order to render a complete impression of the Internet, web archives must be interoperable, enabling a user to make seamless searches. A questionnaire on this issue was sent to 95 national libraries. The answers show agreement with this goal and that web archiving is becoming more common. Partnering is a key ingredient in moving forward, and a useful distinction is suggested between curatorial partners (archives, museums) and technical partners (private companies, universities, other research institutions). Working with private, for-profit companies may also force national libraries to leave room for unorthodox thinking and experimenting. The biggest challenge right now is to make legal deposit, copyright and other legislation adapt to an Internet world, so we can preserve it and make it available to present and future generations.
    Content
    Paper presented at the WORLD LIBRARY AND INFORMATION CONGRESS: 73RD IFLA GENERAL CONFERENCE AND COUNCIL, 19-23 August 2007, Durban, South Africa. - 73 - National Libraries
  2. Danowski, P.: Step one: blow up the silo! : Open bibliographic data, the first step towards Linked Open Data (2010) 0.01
    0.0055105956 = product of:
      0.013776489 = sum of:
        0.007078358 = weight(_text_:a in 3962) [ClassicSimilarity], result of:
          0.007078358 = score(doc=3962,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 3962, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3962)
        0.0066981306 = product of:
          0.013396261 = sum of:
            0.013396261 = weight(_text_:information in 3962) [ClassicSimilarity], result of:
              0.013396261 = score(doc=3962,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.16457605 = fieldWeight in 3962, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3962)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    More and more libraries are starting semantic web projects. The question of the data's license is often not discussed, or the discussion is deferred to the end of the project. This paper discusses why the question of the license is so important in the context of the semantic web that it should be one of the first aspects addressed in a semantic web project. It also shows why a public domain waiver is the only solution that fulfills the special requirements of the semantic web and guarantees the reusability of semantic library data, and thus the sustainability of such projects.
    Content
    Paper presented in Session 93, Cataloguing, at the WORLD LIBRARY AND INFORMATION CONGRESS: 76TH IFLA GENERAL CONFERENCE AND ASSEMBLY, 10-15 August 2010, Gothenburg, Sweden - 149. Information Technology, Cataloguing, Classification and Indexing with Knowledge Management
  3. Networked knowledge organization systems (2001) 0.01
    0.0054237116 = product of:
      0.013559279 = sum of:
        0.004086692 = weight(_text_:a in 6473) [ClassicSimilarity], result of:
          0.004086692 = score(doc=6473,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 6473, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=6473)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 6473) [ClassicSimilarity], result of:
              0.018945174 = score(doc=6473,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 6473, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=6473)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Knowledge Organization Systems can comprise thesauri and other controlled lists of keywords, ontologies, classification systems, clustering approaches, taxonomies, gazetteers, dictionaries, lexical databases, concept maps/spaces, semantic road maps, etc. These schemas enable knowledge structuring and management, knowledge-based data processing and systematic access to knowledge structures in individual collections and digital libraries. Used as interactive information services on the Internet, they have an increased potential to support the description, discovery and retrieval of heterogeneous information resources and to contribute to an overall resource discovery infrastructure.
    Content
    This issue of the Journal of Digital Information evolved from a workshop on Networked Knowledge Organization Systems (NKOS) held at the Fourth European Conference on Research and Advanced Technology for Digital Libraries (ECDL2000) in Lisbon during September 2000. The focus of the workshop was European NKOS initiatives and projects and options for global cooperation. Workshop organizers were Martin Doerr, Traugott Koch, Douglas Tudhope and Repke de Vries. This group has, with Traugott Koch as the main editor and with the help of Linda Hill, cooperated in the editorial tasks for this special issue.
    Source
    Journal of digital information. 1(2001) no.8
  4. Robbio, A. de; Maguolo, D.; Marini, A.: Scientific and general subject classifications in the digital world (2001) 0.01
    0.0054005743 = product of:
      0.0135014355 = sum of:
        0.009036016 = weight(_text_:a in 2) [ClassicSimilarity], result of:
          0.009036016 = score(doc=2,freq=22.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.16900843 = fieldWeight in 2, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2)
        0.0044654203 = product of:
          0.0089308405 = sum of:
            0.0089308405 = weight(_text_:information in 2) [ClassicSimilarity], result of:
              0.0089308405 = score(doc=2,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.10971737 = fieldWeight in 2, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview of discipline-specific classifications in Mathematics, Computing and Physics, on the one hand, and of general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of Web pages produced by a pool of software tools for developing hypertextual presentations of single or paired subject classifications from sequential source files, as well as facilities for gathering information from KWIC lists of classification descriptions. Further, we propose a concept-oriented methodology for interconnecting subject classifications, with the concrete support of a relational analysis of the whole Mathematics Subject Classification through its evolution since 1959. Finally, we recall a very basic method for interconnection provided by coreference in bibliographic records among index elements from different systems, and point out the advantages of establishing the conditions for a more widespread application of such a method. A part of these contents was presented under the title Mathematics Subject Classification and related Classifications in the Digital World at the Eighth International Conference Crimea 2001, "Libraries and Associations in the Transient World: New Technologies and New Forms of Cooperation", Sudak, Ukraine, June 9-17, 2001, in a special session on electronic libraries, electronic publishing and electronic information in science, chaired by Bernd Wegner, Editor-in-Chief of Zentralblatt MATH.
  5. Kubiszewski, I.; Cleveland, C.J.: ¬The Encyclopedia of Earth (2007) 0.01
    0.0052256966 = product of:
      0.013064241 = sum of:
        0.0075385654 = weight(_text_:a in 1170) [ClassicSimilarity], result of:
          0.0075385654 = score(doc=1170,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.14100032 = fieldWeight in 1170, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1170)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 1170) [ClassicSimilarity], result of:
              0.011051352 = score(doc=1170,freq=8.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 1170, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1170)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Encyclopedia of Earth (EoE) seeks to become the world's largest and most authoritative electronic source of information about the environments of Earth and their interactions with society. It is a free, fully searchable collection of articles written by scholars, professionals, educators, and experts who collaborate and review each other's work with oversight from an International Advisory Board. The articles are written in non-technical language and are available for free, with no commercial advertising, to students, educators, scholars, professionals and decision makers, as well as to the general public. The scope of the Encyclopedia of Earth is the environment of the Earth broadly defined, with particular emphasis on the interaction between society and the natural spheres of the Earth. It will be built on integrated knowledge from economists to philosophers, spanning all aspects of the environment. The Encyclopedia is being built bottom-up through the use of wiki software that allows users to freely create and edit content. New collaborations, ideas, and entries dynamically evolve in this environment. In this way, the Encyclopedia is a constantly evolving, self-organizing, expert-reviewed, and up-to-date source of environmental information. The motivation behind the Encyclopedia of Earth is simple. Go to Google and type in climate change, pesticides, nuclear power, sustainable development, or any other important environmental issue. Doing so returns millions of results, some fraction of which are authoritative. The remainder is of poor or unknown quality.
    This illustrates a stark reality of the Web. There are many resources for environmental content, but there is no central repository of authoritative information that meets the needs of diverse user communities. The Encyclopedia of Earth aims to fill that niche by providing content that is both free and reliable. Still in its infancy, the EoE is already an integral part of the emerging effort to increase free and open access to trusted information on the Web. It is a trusted content source for authoritative indexes such as the Online Access to Research in the Environment Initiative, the Health InterNetwork Access to Research Initiative, the Open Education Resources Commons, Scirus, DLESE, and WiserEarth, among others. Our initial Content Partners include the American Institute of Physics, the University of California Museum of Paleontology, TeacherServe®, the U.S. Geological Survey, the International Arctic Science Committee, the World Wildlife Fund, Conservation International, the Biodiversity Institute of Ontario, and the United Nations Environment Programme, to name just a few. The full partner list can be found at <http://www.eoearth.org/article/Content_Partners>. We have a diversity of article types, including standard subject articles, biographies, place-based entries, country profiles, and environmental classics. We recently launched our E-Book series: full-text, fully searchable books with internal hyperlinks to EoE articles. The E-Books include new releases by distinguished scholars as well as classics such as Walden and On the Origin of Species. Because history can be an important guide to the future, we have added an Environmental Classics section that includes such historical works as Energy from Fossil Fuels by M. King Hubbert and Undersea by Rachel Carson. Our services and features will soon be expanded. The EoE will soon be available in different languages, giving a wider range of users access; users will be able to search it geographically or by a well-defined, expert-created taxonomy; and teachers will be able to use the EoE to create unique curricula for their courses.
    Type
    a
  6. Koch, T.; Ardö, A.; Brümmer, A.: ¬The building and maintenance of robot based internet search services : A review of current indexing and data collection methods. Prepared to meet the requirements of Work Package 3 of EU Telematics for Research, project DESIRE. Version D3.11v0.3 (Draft version 3) (1996) 0.01
    0.0050708996 = product of:
      0.012677249 = sum of:
        0.0072082467 = weight(_text_:a in 1669) [ClassicSimilarity], result of:
          0.0072082467 = score(doc=1669,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13482209 = fieldWeight in 1669, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1669)
        0.0054690014 = product of:
          0.010938003 = sum of:
            0.010938003 = weight(_text_:information in 1669) [ClassicSimilarity], result of:
              0.010938003 = score(doc=1669,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.1343758 = fieldWeight in 1669, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1669)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    After a short outline of the problems, possibilities and difficulties of systematic information retrieval on the Internet and a description of development efforts in this area, a specification of the terminology for this report is required. Although the process of retrieval is generally seen as an iterative process of browsing and information retrieval, and several important services on the net have taken this fact into consideration, the emphasis of this report lies on the general retrieval tools for the whole of the Internet. In order to be able to evaluate the differences, possibilities and restrictions of the different services, it is necessary to begin by organizing the existing varieties in a typological/taxonomical survey. The possibilities and weaknesses will be briefly compared and described for the most important services in the categories robot-based WWW catalogues of different types, list- or form-based catalogues, and simultaneous or collected search services respectively. It will, however, for various reasons not be possible to rank them in order of "best" services. Still more important are the weaknesses and problems common to all attempts at indexing the Internet. The problems of the quality of the input, the technical performance and the general problem of indexing virtual hypertext are shown to be at least as difficult as the different aspects of harvesting, indexing and information retrieval. Some of the attempts made in the area of further development of retrieval services will be mentioned in relation to descriptions of the contents of documents and standardization efforts. Internet harvesting and indexing technology and retrieval software are thoroughly reviewed. Details about all services and software are listed in analytical forms in Annexes 1-3.
  7. Schrenk, P.: Gesamtnote 1 für Signal - Telegram-Defizite bei Sicherheit und Privatsphäre : Signal und Telegram im Test (2022) 0.01
    0.0050258166 = product of:
      0.025129084 = sum of:
        0.025129084 = product of:
          0.050258167 = sum of:
            0.050258167 = weight(_text_:22 in 486) [ClassicSimilarity], result of:
              0.050258167 = score(doc=486,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.30952093 = fieldWeight in 486, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=486)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    22. 1.2022 14:01:14
  8. Klic, L.; Miller, M.; Nelson, J.K.; Germann, J.E.: Approaching the largest 'API' : extracting information from the Internet with Python (2018) 0.00
    0.0049160775 = product of:
      0.012290194 = sum of:
        0.004086692 = weight(_text_:a in 4239) [ClassicSimilarity], result of:
          0.004086692 = score(doc=4239,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.07643694 = fieldWeight in 4239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=4239)
        0.008203502 = product of:
          0.016407004 = sum of:
            0.016407004 = weight(_text_:information in 4239) [ClassicSimilarity], result of:
              0.016407004 = score(doc=4239,freq=6.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.20156369 = fieldWeight in 4239, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4239)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
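    As a rough sketch of the workflow the article describes, using two of the four packages it names (urllib and BeautifulSoup); the URL and the table markup below are placeholder assumptions, not details from the article's warrant data project:

        from urllib.request import urlopen
        from bs4 import BeautifulSoup

        html = urlopen("https://example.org/warrants").read()  # hypothetical page
        soup = BeautifulSoup(html, "html.parser")

        # Datafication: turn each HTML table row into one record of a custom dataset.
        records = []
        for row in soup.find_all("tr"):
            cells = [td.get_text(strip=True) for td in row.find_all("td")]
            if cells:
                records.append(cells)
        print(len(records), "rows extracted")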
    Type
    a
  9. Alfaro, L.de: How (much) to trust Wikipedia (2008) 0.00
    0.004915534 = product of:
      0.012288835 = sum of:
        0.008341924 = weight(_text_:a in 2138) [ClassicSimilarity], result of:
          0.008341924 = score(doc=2138,freq=12.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15602624 = fieldWeight in 2138, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2138)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2138) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2138,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2138, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2138)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Wikipedia is a collaborative encyclopedia: anyone can contribute to its articles simply by clicking on an "edit'' button. The open nature of the Wikipedia has been key to its success, but has a flip side: if anyone can edit, how can readers know whether to trust its content? To help answer this question, we have developed a reputation system for Wikipedia authors, and a trust system for Wikipedia text. Authors gain reputation when their contributions are long-lived, and they lose reputation when their contributions are undone in short order. Each word in the Wikipedia is assigned a value of trust that depends on the reputation of its author, as well as on the reputation of the authors that subsequently revised the text where the word appears. To validate our algorithms, we show that reputation and trust have good predictive value: higher-reputation authors are more likely to give lasting contributions, and higher-trust text is less likely to be edited. The trust can be visualized via an intuitive coloring of the text background. The coloring provides an effective way of spotting attempts to tamper with Wikipedia information. A trust-colored version of the entire English Wikipedia can be browsed at http://trust.cse.ucsc.edu/
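    The abstract describes the dynamics but not the formulas, so the update rules and constants in this illustrative sketch are invented; only the direction of the updates (long-lived contributions raise reputation, reverted ones lower it, and word trust follows author and reviser reputation) comes from the text:

        reputation = {}  # author -> non-negative reputation score

        def record_edit_outcome(author, survived, gain=1.0, loss=2.0):
            # Reputation rises when a contribution is long-lived, falls when undone.
            r = reputation.get(author, 0.0)
            reputation[author] = max(0.0, r + gain if survived else r - loss)

        def word_trust(author, revisers, weight=0.5):
            # A word's trust blends its author's reputation with the mean
            # reputation of the authors who later revised the surrounding text.
            base = reputation.get(author, 0.0)
            if not revisers:
                return base
            mean_rev = sum(reputation.get(a, 0.0) for a in revisers) / len(revisers)
            return (1 - weight) * base + weight * mean_rev

        record_edit_outcome("alice", survived=True)
        record_edit_outcome("bob", survived=False)
        print(word_trust("alice", revisers=["bob"]))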
  10. Hyning, V. Van; Lintott, C.; Blickhan, S.; Trouille, L.: Transforming libraries and archives through crowdsourcing (2017) 0.00
    0.004725861 = product of:
      0.011814652 = sum of:
        0.007078358 = weight(_text_:a in 2526) [ClassicSimilarity], result of:
          0.007078358 = score(doc=2526,freq=6.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13239266 = fieldWeight in 2526, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=2526)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 2526) [ClassicSimilarity], result of:
              0.009472587 = score(doc=2526,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 2526, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2526)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This article will showcase the aims and research goals of the project entitled "Transforming Libraries and Archives through Crowdsourcing", recipient of a 2016 Institute for Museum and Library Services grant. This grant will be used to fund the creation of four bespoke text and audio transcription projects which will be hosted on the Zooniverse, the world-leading research crowdsourcing platform. These transcription projects, while supporting the research of four separate institutions, will also function as a means to expand and enhance the Zooniverse platform to better support galleries, libraries, archives and museums (GLAM institutions) in unlocking their data and engaging the public through crowdsourcing.
    Theme
    Information Gateway
    Type
    a
  11. Van de Sompel, H.; Hochstenbach, P.: Reference linking in a hybrid library environment : part 3: generalizing the SFX solution in the "SFX@Ghent & SFX@LANL" experiment (1999) 0.00
    0.0046694665 = product of:
      0.011673667 = sum of:
        0.0072082467 = weight(_text_:a in 1243) [ClassicSimilarity], result of:
          0.0072082467 = score(doc=1243,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13482209 = fieldWeight in 1243, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=1243)
        0.0044654203 = product of:
          0.0089308405 = sum of:
            0.0089308405 = weight(_text_:information in 1243) [ClassicSimilarity], result of:
              0.0089308405 = score(doc=1243,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.10971737 = fieldWeight in 1243, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1243)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This is the third part of our papers about reference linking in a hybrid library environment. The first part described the state-of-the-art of reference linking and contrasted various approaches to the problem. It identified static and dynamic linking solutions, open and closed linking frameworks as well as just-in-case and just-in-time linking. The second part introduced SFX, a dynamic, just-in-time linking solution we built for our own purposes. However, we suggested that the underlying concepts were sufficiently generic to be applied in a wide range of digital libraries. In this third part we show how this has been demonstrated conclusively in the "SFX@Ghent & SFX@LANL" experiment. In this experiment, local as well as remote distributed information resources of the digital library collections of the Research Library of the Los Alamos National Laboratory and the University of Ghent Library have been used as starting points for SFX-links into other parts of the collections. The SFX-framework has further been generalized in order to achieve a technology that can easily be transferred from one digital library environment to another and that minimizes the overhead in making the distributed information services that make up those libraries interoperable with SFX. This third part starts with a presentation of the SFX problem statement in light of the recent discussions on reference linking. Next, it introduces the notion of global and local relevance of extended services as well as an architectural categorization of open linking frameworks, also referred to as frameworks that are supportive of selective resolution. Then, an in-depth description of the generalized SFX solution is given.
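    A schematic illustration of the dynamic, just-in-time idea (not SFX's actual implementation; the resolver address and metadata keys are placeholders): the link is computed at click time from the citation's metadata, so the same source record can resolve to different services at different institutions.

        from urllib.parse import urlencode

        def resolver_link(resolver_base, metadata):
            # Carry the citation metadata to the institution's link resolver,
            # which decides at request time which extended services apply.
            return resolver_base + "?" + urlencode(metadata)

        print(resolver_link(
            "https://resolver.example.edu/sfx",  # hypothetical resolver
            {"genre": "article", "issn": "0000-0000", "volume": "12", "spage": "34"},
        ))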
    Type
    a
  12. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    0.004602507 = product of:
      0.011506268 = sum of:
        0.009138121 = weight(_text_:a in 3391) [ClassicSimilarity], result of:
          0.009138121 = score(doc=3391,freq=40.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1709182 = fieldWeight in 3391, product of:
              6.3245554 = tf(freq=40.0), with freq of:
                40.0 = termFreq=40.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3391)
        0.0023681468 = product of:
          0.0047362936 = sum of:
            0.0047362936 = weight(_text_:information in 3391) [ClassicSimilarity], result of:
              0.0047362936 = score(doc=3391,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.058186423 = fieldWeight in 3391, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3391)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The W3C OWL Web Ontology Language has been a W3C recommendation since 2004, and the specification of its successor, OWL 2, is being finalised. OWL plays an important role in an increasing number and range of applications, and as experience using the language grows, new ideas for further extending its reach continue to be proposed. The OWL: Experiences and Directions (OWLED) workshop series is a forum for practitioners in industry and academia, tool developers, and others interested in OWL to describe real and potential applications, to share experience, and to discuss requirements for language extensions and modifications. The workshop will bring users, implementors and researchers together to measure the state of need against the state of the art, and to set an agenda for research and deployment in order to incorporate OWL-based technologies into new applications. The 2009 OWLED workshop will be co-located with the Eighth International Semantic Web Conference (ISWC) and the Third International Conference on Web Reasoning and Rule Systems (RR2009). It will be held in Chantilly, VA, USA on October 23-24, 2009. The workshop will concentrate on issues related to the development and W3C standardization of OWL 2, and beyond, but other issues related to OWL are also of interest, particularly those related to the task forces set up at OWLED 2007. As usual, the workshop will try to encourage participants to work together and will give space for discussions on various topics, to be decided and published at some point in the future. We ask participants to have a look at these topics and the accepted submissions before the workshop, and to prepare single "slides" that can be presented during these discussions. There will also be formal presentations of submissions to the workshop.
    Content
    Long Papers * Suggestions for OWL 3, Pascal Hitzler. * BestMap: Context-Aware SKOS Vocabulary Mappings in OWL 2, Rinke Hoekstra. * Mechanisms for Importing Modules, Bijan Parsia, Ulrike Sattler and Thomas Schneider. * A Syntax for Rules in OWL 2, Birte Glimm, Matthew Horridge, Bijan Parsia and Peter Patel-Schneider. * PelletSpatial: A Hybrid RCC-8 and RDF/OWL Reasoning and Query Engine, Markus Stocker and Evren Sirin. * The OWL API: A Java API for Working with OWL 2 Ontologies, Matthew Horridge and Sean Bechhofer. * From Justifications to Proofs for Entailments in OWL, Matthew Horridge, Bijan Parsia and Ulrike Sattler. * A Solution for the Man-Man Problem in the Family History Knowledge Base, Dmitry Tsarkov, Ulrike Sattler and Robert Stevens. * Towards Integrity Constraints in OWL, Evren Sirin and Jiao Tao. * Processing OWL2 ontologies using Thea: An application of logic programming, Vangelis Vassiliadis, Jan Wielemaker and Chris Mungall. * Reasoning in Metamodeling Enabled Ontologies, Nophadol Jekjantuk, Gerd Gröner and Jeff Z. Pan.
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
    Demo/Position Papers * Conjunctive Query Answering in Distributed Ontology Systems for Ontologies with Large OWL ABoxes, Xueying Chen and Michel Dumontier. * Node-Link and Containment Methods in Ontology Visualization, Julia Dmitrieva and Fons J. Verbeek. * A JC3IEDM OWL-DL Ontology, Steven Wartik. * Semantically Enabled Temporal Reasoning in a Virtual Observatory, Patrick West, Eric Rozell, Stephan Zednik, Peter Fox and Deborah L. McGuinness. * Developing an Ontology from the Application Up, James Malone, Tomasz Adamusiak, Ele Holloway, Misha Kapushesky and Helen Parkinson.
  13. Schneider, R.: Bibliothek 1.0, 2.0 oder 3.0? (2008) 0.00
    0.0043975897 = product of:
      0.021987949 = sum of:
        0.021987949 = product of:
          0.043975897 = sum of:
            0.043975897 = weight(_text_:22 in 6122) [ClassicSimilarity], result of:
              0.043975897 = score(doc=6122,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.2708308 = fieldWeight in 6122, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=6122)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    It is not yet decided how forcefully the so-called Web 2.0 will change libraries. Here and there, however, with reference to the so-called Semantic Web, people are already speaking of a third, and in some places a fourth, generation of the Web. The talk critically examines which concepts lie behind these labels and pursues the question of which challenges an adoption of these concepts would bring for the library world. See in particular slide 22, with a depiction of the development from Web 1.0 to Web 4.0.
  14. Dodge, M.: What does the Internet look like, Jellyfish perhaps? : Exploring a visualization of the Internet by Young Hyun of CAIDA (2001) 0.00
    0.004393506 = product of:
      0.010983764 = sum of:
        0.009010308 = weight(_text_:a in 1554) [ClassicSimilarity], result of:
          0.009010308 = score(doc=1554,freq=56.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 1554, product of:
              7.483315 = tf(freq=56.0), with freq of:
                56.0 = termFreq=56.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1554)
        0.0019734555 = product of:
          0.003946911 = sum of:
            0.003946911 = weight(_text_:information in 1554) [ClassicSimilarity], result of:
              0.003946911 = score(doc=1554,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.048488684 = fieldWeight in 1554, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1554)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Content
    "The Internet is often likened to an organic entity and this analogy seems particularly appropriate in the light of some striking new visualizations of the complex mesh of Internet pathways. The images are results of a new graph visualization tool, code-named Walrus, being developed by researcher, Young Hyun, at the Cooperative Association for Internet Data Analysis (CAIDA) [1]. Although Walrus is still in early days of development, I think these preliminary results are some of the most intriguing and evocative images of the Internet's structure that we have seen in last year or two. A few years back I spent an enjoyable afternoon at the Monterey Bay Aquarium and I particularly remember a stunning exhibit of jellyfish, which were illuminated with UV light to show their incredibly delicate organic structures, gently pulsing in tanks of inky black water. Jellyfish are some of the strangest, alien, and yet most beautiful, living creatures [2]. Having looked at the Walrus images I began to wonder, perhaps the backbone networks of the Internet look like jellyfish? The image above is a screengrab of a Walrus visualization of a huge graph. The graph data in this particular example depicts Internet topology, as measured by CAIDA's skitter monitor [3] based in London, showing 535,000-odd Internet nodes and over 600,000 links. The nodes, represented by the yellow dots, are a large sample of computers from across the whole range of Internet addresses. Walrus is an interactive visualization tool that allows the analyst to view massive graphs from any position. The graph is projected inside a 3D sphere using a special kind of space based hyperbolic geometry. This is a non-Euclidean space, which has useful distorting properties of making elements at the center of the display much larger than those on the periphery. You interact with the graph in Walrus by selecting a node of interest, which is smoothly moved into the center of the display, and that region of the graph becomes greatly enlarged, enabling you to focus on the fine detail. Yet the rest of the graph remains visible, providing valuable context of the overall structure. (There are some animations available on the website showing Walrus graphs being moved, which give some sense of what this is like.) Hyperbolic space projection is commonly know as "focus+context" in the field of information visualization and has been used to display all kinds of data that can be represented as large graphs in either two and three dimensions [4]. It can be thought of as a moveable fish-eye lens. The Walrus visualization tool draws much from the hyperbolic research by Tamara Munzner [5] as part of her PhD at Stanford. (Map of the Month examined some of Munzner's work from 1996 in an earlier article, Internet Arcs Around The Globe.) Walrus is being developed as a general-purpose visualization tool able to cope with massive directed graphs, in the order of a million nodes. Providing useful and interactively useable visualization of such large volumes of graph data is a tough challenge and is particularly apposite to the task of mapping of Internet backbone infrastructures. In a recent email Map of the Month asked Walrus developer Young Hyun what had been the hardest part of the project thus far. "The greatest difficulty was in determining precisely what Walrus should be about," said Hyun. Crucially "... we had to face the question of what it means to visualize a large graph. 
It would defeat the aim of a visualization to overload a user with the large volume of data that is likely to be associated with a large graph." I think the preliminary results available show that Walrus is heading in right direction tackling these challenges.
    However, Hyun points out that it is still early days and over the next six months or so Walrus will be extended to include core functions beyond just visualizing raw topology graphs. For CAIDA, it is important to see performance measurements associated with the links; as Hyun notes, "you can imagine how important this is to our visualizations, given that we are almost never interested in the mere topology of a network." Walrus has not revealed much new scientific knowledge of the Internet thus far, although Hyun commented that the current visualization of topology "did make it easy to see the degree to which the network is in tangles how some nodes form large clusters and how they are seemingly interconnected in random ways." This random connectedness is perhaps what gives the Internet its organic look and feel. Of course this is not real shape of the Internet. One must always be wary when viewing and interpreting any particular graph visualization as much of the final "look and feel" results from subjective decisions of the analyst, rather than inherent in the underlying phenomena. As Hyun pointed out, "... the organic quality of the images derives almost entirely from the particular combination of the layout algorithm used and hyperbolic distortion." There is no inherently "natural" shape when visualizing massive data, such as the topology of the global Internet, in an abstract space. Somewhat like a jellyfish, maybe? ----
    What Is CAIDA? Association for Internet Data Analysis, started in 1997 and is based in the San Diego Supercomputer Center. CAIDA is led by KC Claffy along with a staff of serious Net techie researchers and grad students, and they are one of the worlds leading teams of academic researchers studying how the Internet works [6] . Their mission is "to provide a neutral framework for promoting greater cooperation in developing and deploying Internet measurement, analysis, and visualization tools that will support engineering and maintaining a robust, scaleable global Internet infrastructure." In addition to the Walrus visualization tool and the skitter monitoring system which we have touched on here, CAIDA has many other interesting projects mapping the infrastructure and operations of the global Internet. Two of my particular favorite visualization projects developed at CAIDA are MAPNET and Plankton [7] . MAPNET provides a useful interactive tool for mapping ISP backbones onto real-world geography. You can select from a range of commercial and research backbones and compare their topology of links overlaid on the same map. (The major problem with MAPNET is that is based on static database of ISP backbones links, which has unfortunately become obsolete over time.) Plankton, developed by CAIDA researchers Bradley Huffaker and Jaeyeon Jung, is an interactive tool for visualizing the topology and traffic on the global hierarchy of Web caches.
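    A toy version of the "focus+context" idea described above (assumed mathematics, not Walrus code): Walrus works in a 3D hyperbolic space, but the same effect can be shown in the 2D Poincaré disk, where a Möbius transformation moves the selected node to the center, enlarging its neighborhood while compressing the periphery.

        def refocus(z, focus):
            # Möbius transformation of the unit disk sending `focus` to 0;
            # points near the focus spread out, the periphery is compressed.
            return (z - focus) / (1 - focus.conjugate() * z)

        nodes = [0.1 + 0.2j, 0.7 + 0.1j, -0.5 - 0.4j]  # positions with |z| < 1
        focus = 0.7 + 0.1j                              # the node the user selected
        for z in nodes:
            print(z, "->", refocus(z, focus))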
  15. Lewandowski, D.; Mayr, P.: Exploring the academic invisible Web (2006) 0.00
    0.004303226 = product of:
      0.010758064 = sum of:
        0.0068111527 = weight(_text_:a in 3752) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=3752,freq=8.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 3752, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3752)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 3752) [ClassicSimilarity], result of:
              0.007893822 = score(doc=3752,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 3752, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3752)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Purpose: To provide a critical review of Bergman's 2001 study on the Deep Web. In addition, we bring a new concept into the discussion, the Academic Invisible Web (AIW). We define the Academic Invisible Web as consisting of all databases and collections relevant to academia but not searchable by the general-purpose internet search engines. Indexing this part of the Invisible Web is central to scientific search engines. We provide an overview of approaches followed thus far. Design/methodology/approach: Discussion of measures and calculations, estimation based on informetric laws. Literature review on approaches for uncovering information from the Invisible Web. Findings: Bergman's size estimate of the Invisible Web is highly questionable. We demonstrate some major errors in the conceptual design of the Bergman paper. A new (raw) size estimate is given. Research limitations/implications: The precision of our estimate is limited due to a small sample size and lack of reliable data. Practical implications: We can show that no single library alone will be able to index the Academic Invisible Web. We suggest collaboration to accomplish this task. Originality/value: Provides library managers and those interested in developing academic search engines with data on the size and attributes of the Academic Invisible Web.
  16. Gore, E.; Bitta, M.D.; Cohen, D.: ¬The Digital Public Library of America and the National Digital Platform (2017) 0.00
    0.0042062993 = product of:
      0.0105157485 = sum of:
        0.005779455 = weight(_text_:a in 3655) [ClassicSimilarity], result of:
          0.005779455 = score(doc=3655,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.10809815 = fieldWeight in 3655, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046875 = fieldNorm(doc=3655)
        0.0047362936 = product of:
          0.009472587 = sum of:
            0.009472587 = weight(_text_:information in 3655) [ClassicSimilarity], result of:
              0.009472587 = score(doc=3655,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.116372846 = fieldWeight in 3655, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3655)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    The Digital Public Library of America brings together the riches of America's libraries, archives, and museums, and makes them freely available to the world. In order to do this, DPLA has had to build elements of the national digital platform to connect to those institutions and to serve their digitized materials to audiences. In this article, we detail the construction of two critical elements of our work: the decentralized national network of "hubs," which operate in states across the country; and a version of the Hydra repository software that is tailored to the needs of our community. This technology and the organizations that make use of it serve as the foundation of the future of DPLA and other projects that seek to take advantage of the national digital platform.
    Theme
    Information Gateway
    Type
    a
  17. Noerr, P.: ¬The Digital Library Tool Kit (2001) 0.00
    0.0041463105 = product of:
      0.010365776 = sum of:
        0.0072082467 = weight(_text_:a in 6774) [ClassicSimilarity], result of:
          0.0072082467 = score(doc=6774,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13482209 = fieldWeight in 6774, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=6774)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 6774) [ClassicSimilarity], result of:
              0.006315058 = score(doc=6774,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 6774, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=6774)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    This second edition is an update and expansion of the original April 1998 edition. It contains more of everything. In particular, the resources section has been expanded and updated. This document is designed to help those who are contemplating setting up a digital library. Whether this is a first-time computerization effort or an extension of an existing library's services, there are questions to be answered, decisions to be made, and work to be done. This document covers all those stages and more. The first section (Chapter 1) is a series of questions to ask yourself and your organization. The questions are designed generally to raise issues rather than to provide definitive answers. The second section (Chapters 2-5) discusses the planning and implementation of a digital library. It raises some issues which are specific, and contains information to help answer the specifics and a host of other aspects of a digital library project. The third section (Chapters 6-7) includes resources and a look at current research, existing digital library systems, and the future. These chapters enable you to find additional resources and help, as well as show you where to look for interesting examples of the current state of the art.
  18. Veelen, I. van: ¬The truth according to Wikipedia (2008) 0.00
    0.0041463105 = product of:
      0.010365776 = sum of:
        0.0072082467 = weight(_text_:a in 2139) [ClassicSimilarity], result of:
          0.0072082467 = score(doc=2139,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13482209 = fieldWeight in 2139, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=2139)
        0.003157529 = product of:
          0.006315058 = sum of:
            0.006315058 = weight(_text_:information in 2139) [ClassicSimilarity], result of:
              0.006315058 = score(doc=2139,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.0775819 = fieldWeight in 2139, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2139)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Google or Wikipedia? Those of us who search online -- and who doesn't? -- are getting referred more and more to Wikipedia. For the past two years, this free online "encyclopedia of the people" has been topping the lists of the world's most popular websites. But do we really know what we're using? Backlight plunges into the story behind Wikipedia and explores the wonderful world of Web 2.0. Is it a revolution, or pure hype? Director IJsbrand van Veelen goes looking for the truth behind Wikipedia. Only five people are employed by the company, and all its activities are financed by donations and subsidies. The online encyclopedia that everyone can contribute to and revise is now even bigger than the illustrious Encyclopedia Britannica. Does this spell the end for traditional institutions of knowledge such as Britannica? And should we applaud this development as progress or mourn it as a loss? How reliable is Wikipedia? Do "the people" really hold the lease on wisdom? And since when do we believe that information should be free for all? In this film, "Wikipedians," the folks who spend their days writing and editing articles, explain how the online encyclopedia works. In addition, the parties involved discuss Wikipedia's ethics and quality of content. It quickly becomes clear that there are camps of both believers and critics. Wiki's Truth introduces us to the main players in the debate: Jimmy Wales (founder and head Wikipedian), Larry Sanger (co-founder of Wikipedia, now head of Wiki spin-off Citizendium), Andrew Keen (author of The Cult of the Amateur: How Today's Internet Is Killing Our Culture and Assaulting Our Economy), Phoebe Ayers (a Wikipedian in California), Ndesanjo Macha (Swahili Wikipedia, digital activist), Tim O'Reilly (CEO of O'Reilly Media, the "inventor" of Web 2.0), Charles Leadbeater (philosopher and author of We Think, about crowdsourcing), and Robert McHenry (former editor-in-chief of Encyclopedia Britannica). The film opens with a video by Chris Pirillo. The questions surrounding Wikipedia lead to a bigger discussion of Web 2.0, a phenomenon in which the user determines the content. Examples include YouTube, MySpace, Facebook, and Wikipedia. These sites would appear to provide new freedom and opportunities for undiscovered talent and unheard voices, but just where does the boundary lie between expert and amateur? Who will survive according to the laws of this new "digital Darwinism"? Are equality and truth really reconcilable ideals? And most importantly, has the Internet brought us wisdom and truth, or is it high time for a cultural counterrevolution?
  19. Lietz, C.: Social-Credit-Scoring : die Informationswissenschaft in der Verantwortung (2018) 0.00
    0.0041173934 = product of:
      0.010293484 = sum of:
        0.004767807 = weight(_text_:a in 4592) [ClassicSimilarity], result of:
          0.004767807 = score(doc=4592,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.089176424 = fieldWeight in 4592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4592)
        0.005525676 = product of:
          0.011051352 = sum of:
            0.011051352 = weight(_text_:information in 4592) [ClassicSimilarity], result of:
              0.011051352 = score(doc=4592,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13576832 = fieldWeight in 4592, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4592)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Largely unnoticed by information science, and by the general public as well, a new kind of rating system is currently taking shape in China. Social credit scoring is likely to be a familiar term to only a few people in Germany, and hardly any material on it can be found in the professional literature. Only various international online journals, blogs, a few TV reports, and the re:publica conference deal with it in any depth, which is why the term occasionally comes up in passing in public discourse. For information science, this topic is highly relevant. Anyone who looks into it more closely is faced, as an information professional, with the question of why the professional community largely ignores a topic with such serious consequences for society.
    Type
    a
  20. Schetsche, M.: ¬Die ergoogelte Wirklichkeit : Verschwörungstheorien und das Internet (2005) 0.00
    0.0037693623 = product of:
      0.018846812 = sum of:
        0.018846812 = product of:
          0.037693623 = sum of:
            0.037693623 = weight(_text_:22 in 3397) [ClassicSimilarity], result of:
              0.037693623 = score(doc=3397,freq=2.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23214069 = fieldWeight in 3397, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3397)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Abstract
    "Zweimal täglich googeln" empfiehlt Mathias Bröckers in seinem Buch "Verschwörungen, Verschwörungstheorien und die Geheimnisse des 11.9.". Der Band gilt den gutbürgerlichen Medien von FAZ bis Spiegel als Musterbeispiel krankhafter Verschwörungstheorie. Dabei wollte der Autor - nach eigenem Bekunden - keine Verschwörungstheorie zum 11. September vorlegen, sondern lediglich auf Widersprüche und Fragwürdigkeiten in den amtlichen Darstellungen und Erklärungen der US-Regierung zu jenem Terroranschlag hinweisen. Unabhängig davon, wie ernst diese Einlassungen des Autors zu nehmen sind, ist der "Fall Bröckers" für die Erforschung von Verschwörungstheorien unter zwei Aspekten interessant: Erstens geht der Band auf ein [[extern] ] konspirologisches Tagebuch zurück, das der Autor zwischen dem 13. September 2001 und dem 22. März 2002 für das Online-Magazin Telepolis verfasst hat; zweitens behauptet Bröckers in der Einleitung zum Buch, dass er für seine Arbeit ausschließlich über das Netz zugängliche Quellen genutzt habe. Hierbei hätte ihm Google unverzichtbare Dienste geleistet: Um an die Informationen in diesem Buch zu kommen, musste ich weder über besondere Beziehungen verfügen, noch mich mit Schlapphüten und Turbanträgern zu klandestinen Treffen verabreden - alle Quellen liegen offen. Sie zu finden, leistete mir die Internet-Suchmaschine Google unschätzbare Dienste. Mathias Bröckers

Years

Languages

  • e 60
  • d 38
  • el 1

Types

  • a 39
  • s 3
  • i 2
  • r 2
  • b 1
  • m 1
  • x 1

Classifications