Search (173 results, page 9 of 9)

  • language_ss:"e"
  • type_ss:"el"
  1. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 4550) [ClassicSimilarity], result of:
              0.031382043 = score(doc=4550,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 4550, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4550)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    10.11.2018 16:27:22
  2. Ebrahimi, M.; Sarker, M.K.; Bianchi, F.; Hitzler, P.; Doran, D.; Xie, N.: Reasoning over RDF knowledge bases using deep learning (2018) 0.00
    0.0039227554 = product of:
      0.015691021 = sum of:
        0.015691021 = product of:
          0.031382043 = sum of:
            0.031382043 = weight(_text_:22 in 4553) [ClassicSimilarity], result of:
              0.031382043 = score(doc=4553,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.19345059 = fieldWeight in 4553, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4553)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    16.11.2018 14:22:01
  3. Maurer, H.; Balke, T.; Kappe, F.; Kulathuramaiyer, N.; Weber, S.; Zaka, B.: Report on dangers and opportunities posed by large search engines, particularly Google (2007) 0.00
    0.0039210645 = product of:
      0.015684258 = sum of:
        0.015684258 = product of:
          0.031368516 = sum of:
            0.031368516 = weight(_text_:aspects in 754) [ClassicSimilarity], result of:
              0.031368516 = score(doc=754,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.14981388 = fieldWeight in 754, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=754)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    We believe that the importance has shifted considerably since the approval of the project. We thus will emphasize some aspects much more than ever planned, and treat others in a shorter fashion. We believe and hope that this is also seen as an unexpected benefit by BMVIT. This report is structured as follows: after an Executive Summary that highlights why the topic is of such paramount importance, an introduction explains possible optimal ways to study the report and its appendices. We can report with some pride that many of the ideas have been accepted by the international scene at conferences and by journals as being of such crucial importance that a number of papers (constituting the appendices and elaborating the various sections) have been considered high-quality material for publication. We want to thank the Austrian Federal Ministry of Transport, Innovation and Technology (BMVIT) for making this study possible. We would be delighted if the study could be distributed widely to European decision makers, as some of the issues involved do indeed concern all of Europe, if not the world.
  4. Thaller, M.: From the digitized to the digital library (2001) 0.00
    0.0039210645 = product of:
      0.015684258 = sum of:
        0.015684258 = product of:
          0.031368516 = sum of:
            0.031368516 = weight(_text_:aspects in 1159) [ClassicSimilarity], result of:
              0.031368516 = score(doc=1159,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.14981388 = fieldWeight in 1159, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1159)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    The author holds a chair in Humanities Computer Science at the University of Cologne. For a number of years, he has been responsible for digitization projects, either as project director or as the person responsible for the technology being employed on the projects. The "Duderstadt project" (http://www.archive.geschichte.mpg.de/duderstadt/dud-e.htm) is one such project. It is one of the early large-scale manuscript servers, finished at the end of 1998, with approximately 80,000 high resolution documents representing the holdings of a city archive before the year 1600. The digital library of the Max-Planck-Institut für Europäische Rechtsgeschichte in Frankfurt (http://www.mpier.uni-frankfurt.de/dlib) is another project on which the author has worked, with currently approximately 900,000 pages. The author is currently project director of the project "Codices Electronici Ecclesiae Coloniensis" (CEEC), which has just started and will ultimately consist of approximately 130,000 very high resolution color pages representing the complete holdings of the manuscript library of a medieval cathedral. It is being designed in close cooperation with the user community of such material. The project site (http://www.ceec.uni-koeln.de), while not yet officially opened, currently holds about 5,000 pages and is growing by 100 - 150 pages per day. Parallel to the CEEC model project, a conceptual project, the "Codex Electronicus Coloniensis" (CEC), is at work on the definition of an abstract model for the representation of medieval codices in digital form. The following paper has grown out of the design considerations for the mentioned CEC project. The paper reflects a growing concern of the author's that some of the recent advances in digital (research) libraries are being diluted because it is not clear whether the advances really reach the audience for whom the projects would be most useful. Many, if not most, digitization projects have aimed at existing collections as individual servers. A digital library, however, should be more than a digitized one. It should be built according to principles that are not necessarily the same as those employed for paper collections, and it should be evaluated according to different measures which are not yet totally clear. The paper takes the form of six theses on various aspects of the ongoing transition to digital libraries. These theses have been presented at a forum on the German "retrodigitization" program. The program aims at the systematic conversion of library resources into digital form, concentrates for a number of reasons on material primarily of interest to the Humanities, and is funded by the German research council. As such this program is directly aimed at improving the overall infrastructure of academic research; other users of libraries are of interest, but are not central to the program.
  5. Choudhury, G.S.; DiLauro, T.; Droettboom, M.; Fujinaga, I.; MacMillan, K.: Strike up the score : deriving searchable and playable digital formats from sheet music (2001) 0.00
    0.0039210645 = product of:
      0.015684258 = sum of:
        0.015684258 = product of:
          0.031368516 = sum of:
            0.031368516 = weight(_text_:aspects in 1220) [ClassicSimilarity], result of:
              0.031368516 = score(doc=1220,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.14981388 = fieldWeight in 1220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1220)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    In the final report to NEH, the Curator of Special Collections at the MSEL stated, "the most useful thing we learned from this project was that you can never overestimate the amount of time it will take to create a quality digital product" (Requardt 1998). The word "resources" might represent a more comprehensive choice than the word "time" in this previous statement. This "sink" of time and resources manifested itself by an increasing allocation of human labor and time to deal with workflow issues related to large-scale digitization. The Levy Collection experience provides ample evidence that there will be mistakes during and after digitization and that unforeseen challenges or difficulties will arise, especially when dealing with rare or fragile materials. The current strategy of allocating additional human labor neither limits costs nor scales well. Consequently, the Digital Knowledge Center (DKC) of the Milton S. Eisenhower Library sought and secured funding for the development of a workflow management system through the National Science Foundation's (NSF) Digital Libraries Initiative, Phase 2 and the Institute for Museum and Library Services (IMLS)6 National Leadership Grant Program. The Levy family and a technology entrepreneur in Maryland provided additional funding for other aspects of the project. The mission of this second phase of the Levy project ("Levy II") can be summarized as follows: * Reduce costs for large collection ingestion by creating a suite of open-source processes, tools and interfaces for workflow management * Increase access capabilities by providing a suite of research tools * Demonstrate utility of tools and processes with a subset of the online Levy Collection The cornerstones of the workflow management system include: optical music recognition (OMR) software to generate a logical representation of the score -- for sound generation, musical searching, and musicological research -- and an automated name authority control system to disambiguate names (e.g., the authors Mark Twain and Samuel Clemens are the same individual). The research tools focus upon enhanced searching capabilities through the development and application of a fast, disk-based search engine for lyrics and music, and the incorporation of an XML structure for metadata. Though this paper focuses on the OMR component of our work, a companion paper to be published in a future issue of D-Lib will describe more fully the other tools (e.g., the automated name authority control system and the disk-based search engine), the overall workflow management system, and the project management process.
  6. Dolin, R.; Agrawal, D.; El Abbadi, A.; Pearlman, J.: Using automated classification for summarizing and selecting heterogeneous information sources (1998) 0.00
    0.0039210645 = product of:
      0.015684258 = sum of:
        0.015684258 = product of:
          0.031368516 = sum of:
            0.031368516 = weight(_text_:aspects in 1253) [ClassicSimilarity], result of:
              0.031368516 = score(doc=1253,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.14981388 = fieldWeight in 1253, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1253)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    Information retrieval over the Internet increasingly requires the filtering of thousands of heterogeneous information sources. Important sources of information include not only traditional databases with structured data and queries, but also increasing numbers of non-traditional, semi- or unstructured collections such as Web sites, FTP archives, etc. As the number and variability of sources increases, new ways of automatically summarizing, discovering, and selecting collections relevant to a user's query are needed. One such method involves the use of classification schemes, such as the Library of Congress Classification (LCC), within which a collection may be represented based on its content, irrespective of the structure of the actual data or documents. For such a system to be useful in a large-scale distributed environment, it must be easy to use for both collection managers and users. As a result, it must be possible to classify documents automatically within a classification scheme. Furthermore, there must be a straightforward and intuitive interface with which the user may use the scheme to assist in information retrieval (IR). Our work with the Alexandria Digital Library (ADL) Project focuses on geo-referenced information, whether text, maps, aerial photographs, or satellite images. As a result, we have emphasized techniques which work with both text and non-text, such as combined textual and graphical queries, multi-dimensional indexing, and IR methods which are not solely dependent on words or phrases. Part of this work involves locating relevant online sources of information. In particular, we have designed and are currently testing aspects of an architecture, Pharos, which we believe will scale up to 1,000,000 heterogeneous sources. Pharos accommodates heterogeneity in content and format, both among multiple sources as well as within a single source. That is, we consider sources to include Web sites, FTP archives, newsgroups, and full digital libraries; all of these systems can include a wide variety of content and multimedia data formats. Pharos is based on the use of hierarchical classification schemes. These include not only well-known 'subject' (or 'concept') based schemes such as the Dewey Decimal System and the LCC, but also, for example, geographic classifications, which might be constructed as layers of smaller and smaller hierarchical longitude/latitude boxes. Pharos is designed to work with sophisticated queries which utilize subjects, geographical locations, temporal specifications, and other types of information domains. The Pharos architecture requires that hierarchically structured collection metadata be extracted so that it can be partitioned in such a way as to greatly enhance scalability. Automated classification is important to Pharos because it allows information sources to automatically extract the requisite collection metadata that must be distributed.
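    The "layers of smaller and smaller hierarchical longitude/latitude boxes" mentioned in this abstract can be pictured as a quadtree over the globe. The following Python sketch is only an illustration of that construction; the function name and the quadrant labelling are invented here and are not part of Pharos:

    def geo_cells(lat, lon, depth):
        # Quadtree-style nesting: each level halves the box in both axes,
        # so a point's classification is the list of nested quadrants.
        south, west, north, east = -90.0, -180.0, 90.0, 180.0
        path = []
        for _ in range(depth):
            mid_lat, mid_lon = (south + north) / 2, (west + east) / 2
            quadrant = ("N" if lat >= mid_lat else "S") + \
                       ("E" if lon >= mid_lon else "W")
            path.append(quadrant)
            if lat >= mid_lat: south = mid_lat
            else:              north = mid_lat
            if lon >= mid_lon: west = mid_lon
            else:              east = mid_lon
        return path

    print(geo_cells(50.94, 6.96, 4))   # Cologne -> ['NE', 'NW', 'SW', 'SW']

    Collections whose cell paths share a prefix fall in the same coarse geographic class, which is what lets such metadata be partitioned hierarchically.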
  7. OWLED 2009; OWL: Experiences and Directions, Sixth International Workshop, Chantilly, Virginia, USA, 23-24 October 2009, Co-located with ISWC 2009. (2009) 0.00
    0.0039210645 = product of:
      0.015684258 = sum of:
        0.015684258 = product of:
          0.031368516 = sum of:
            0.031368516 = weight(_text_:aspects in 3391) [ClassicSimilarity], result of:
              0.031368516 = score(doc=3391,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.14981388 = fieldWeight in 3391, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=3391)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Content
    Short Papers * A Database Backend for OWL, Jörg Henss, Joachim Kleb and Stephan Grimm. * Unifying SysML and OWL, Henson Graves. * The OWLlink Protocol, Thorsten Liebig, Marko Luther and Olaf Noppens. * A Reasoning Broker Framework for OWL, Juergen Bock, Tuvshintur Tserendorj, Yongchun Xu, Jens Wissmann and Stephan Grimm. * Change Representation For OWL 2 Ontologies, Raul Palma, Peter Haase, Oscar Corcho and Asunción Gómez-Pérez. * Practical Aspects of Query Rewriting for OWL 2, Héctor Pérez-Urbina, Ian Horrocks and Boris Motik. * CSage: Use of a Configurable Semantically Attributed Graph Editor as Framework for Editing and Visualization, Lawrence Levin. * A Conformance Test Suite for the OWL 2 RL/RDF Rules Language and the OWL 2 RDF-Based Semantics, Michael Schneider and Kai Mainzer. * Improving the Data Quality of Relational Databases using OBDA and OWL 2 QL, Olivier Cure. * Temporal Classes and OWL, Natalya Keberle. * Using Ontologies for Medical Image Retrieval - An Experiment, Jasmin Opitz, Bijan Parsia and Ulrike Sattler. * Task Representation and Retrieval in an Ontology-Guided Modelling System, Yuan Ren, Jens Lemcke, Andreas Friesen, Tirdad Rahmani, Srdjan Zivkovic, Boris Gregorcic, Andreas Bartho, Yuting Zhao and Jeff Z. Pan. * A platform for reasoning with OWL-EL knowledge bases in a Peer-to-Peer environment, Alexander De Leon and Michel Dumontier. * Axiomé: a Tool for the Elicitation and Management of SWRL Rules, Saeed Hassanpour, Martin O'Connor and Amar Das. * SQWRL: A Query Language for OWL, Martin O'Connor and Amar Das. * Classifying ELH Ontologies In SQL Databases, Vincent Delaitre and Yevgeny Kazakov. * A Semantic Web Approach to Represent and Retrieve Information in a Corporate Memory, Ana B. Rios-Alvarado, R. Carolina Medina-Ramirez and Ricardo Marcelin-Jimenez. * Towards a Graphical Notation for OWL 2, Elisa Kendall, Roy Bell, Roger Burkhart, Mark Dutra and Evan Wallace.
  8. Vocht, L. De: Exploring semantic relationships in the Web of Data : Semantische relaties verkennen in data op het web (2017) 0.00
    0.0032675539 = product of:
      0.0130702155 = sum of:
        0.0130702155 = product of:
          0.026140431 = sum of:
            0.026140431 = weight(_text_:aspects in 4232) [ClassicSimilarity], result of:
              0.026140431 = score(doc=4232,freq=2.0), product of:
                0.20938325 = queryWeight, product of:
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.046325076 = queryNorm
                0.1248449 = fieldWeight in 4232, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.5198684 = idf(docFreq=1308, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=4232)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    When we speak about finding relationships between resources, it is necessary to dive deeper into the structure. The graph structure of linked data, where the semantics give meaning to the relationships between resources, enables the execution of pathfinding algorithms. The assigned weights and heuristics are basic components of such algorithms and ultimately define which resources are included in a path, and in which order. These paths explain indirect connections between resources. Our third technique proposes an algorithm that optimizes the choice of resources in terms of serendipity. Some optimizations guard the consistency of candidate paths, maximizing the coherence of consecutive connections to avoid trivial or overly arbitrary paths. The implementation uses the A* algorithm, the de facto reference for heuristically optimized minimal-cost paths. The effectiveness of the paths was measured with common automatic metrics and with surveys in which users could indicate their preference among paths generated in different ways. Finally, all our techniques are applied to a use case about publications in digital libraries, where they are aligned with information about scientific conferences and researchers. This use case is a practical example because the different aspects of exploratory search come together in it. In fact, the techniques also evolved from the experience of implementing the use case. Practical details of the semantic model are explained, and the implementation of the search system is clarified module by module. The evaluation positions the result, a prototype tool for exploring scientific publications, researchers and conferences, next to some important alternatives.
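    The A* pathfinding this abstract refers to can be sketched compactly. The toy triple store, uniform edge weights, and zero heuristic below are stand-ins invented for illustration; they are not De Vocht's actual weighting or serendipity optimizations:

    import heapq

    # Toy RDF-style graph of (subject, predicate, object) triples.
    TRIPLES = [
        ("paper:A", "cites", "paper:B"),
        ("paper:B", "presentedAt", "conf:ISWC"),
        ("author:X", "wrote", "paper:A"),
        ("author:X", "attended", "conf:ISWC"),
    ]

    def neighbors(node):
        # Traverse triples in both directions; a real system would weight
        # each predicate differently instead of the uniform 1.0 used here.
        for s, p, o in TRIPLES:
            if s == node:
                yield o, 1.0
            elif o == node:
                yield s, 1.0

    def astar(start, goal, heuristic=lambda n: 0.0):
        # Plain A*; with the zero heuristic this reduces to Dijkstra.
        frontier = [(heuristic(start), 0.0, start, [start])]
        visited = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt, weight in neighbors(node):
                if nxt not in visited:
                    heapq.heappush(frontier, (cost + weight + heuristic(nxt),
                                              cost + weight, nxt, path + [nxt]))
        return None

    print(astar("author:X", "paper:B"))
    # one shortest connection, e.g. ['author:X', 'conf:ISWC', 'paper:B']

    Predicate-specific weights and an informed heuristic would replace the uniform costs here; that is where the abstract's coherence and serendipity tuning comes in.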
  9. Knutsen, U.: Working in a distributed electronic environment : Experiences with the Norwegian edition (2003) 0.00
    0.003138204 = product of:
      0.012552816 = sum of:
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 1937) [ClassicSimilarity], result of:
              0.025105633 = score(doc=1937,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 1937, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1937)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Object
    DDC-22
  10. Encyclopædia Britannica 2003 : Ultimate Reference Suite (2002) 0.00
    0.003138204 = product of:
      0.012552816 = sum of:
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 2182) [ClassicSimilarity], result of:
              0.025105633 = score(doc=2182,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 2182, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2182)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Footnote
    Review in: c't 2002, H.23, S.229 (T.J. Schult): "Until now, Mac users have had little choice in multimedia encyclopedias: either the dismal Kosmos Kompaktwissen, which is due to appear for the last time this year and masquerades as the Systhema Universallexikon, or a Brockhaus in Text und Bild with excellent articles but meager multimedia. The Britannica encyclopedias distributed in Germany by Acclaim are an excellent alternative for anyone proficient in English. Whereas formerly only the entry-level Britannicas ran on the Mac, this now holds for all three versions: Student, Deluxe and Ultimate Reference Suite. The Suite contains not only all 75,000 articles of the 32 Britannica volumes but also the 15,000 of the Student Encyclopaedia, a separate school encyclopedia whose simple English makes it a good starting point for non-native speakers. Anyone who wants it even more elementary can click through to the Britannica Elementary Encyclopaedia, which is accessible under the same interface as the other works. Finally, the Suite includes a world atlas as well as monolingual Merriam-Webster dictionaries and thesauri at the Collegiate and Student levels, with 555,000 definitions, synonyms and antonyms. Anyone who does much research, or even writes, in English will find this offer (EUR 99.95) hard to resist, especially as the print edition costs a good 1,600 euros. The texts are simply colossal - the table of contents of the article Germany alone fills seven screen pages. The content from the Britannica volumes already provides more than twice as much text as the Brockhaus Enzyklopädie digital, which costs around a thousand euros (c't 22/02, p.38). The 220,000 thematically sorted web links alone are worth the money. Whoever opts for the full installation, which occupies 2.4 gigabytes, never has to insert the DVD (alternatively, four CD-ROMs) again. This year no one has to struggle with the Britannica-typical jumble of encyclopedia articles and many, many yearbooks - apart from the base text of the three encyclopedias, 'only' the two yearbooks 2001 and 2002 are listed separately. Anyone with a good command of English may want to take this opportunity to buy."
  11. Bradford, R.B.: Relationship discovery in large text collections using Latent Semantic Indexing (2006) 0.00
    0.003138204 = product of:
      0.012552816 = sum of:
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 1163) [ClassicSimilarity], result of:
              0.025105633 = score(doc=1163,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 1163, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1163)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Proceedings of the Fourth Workshop on Link Analysis, Counterterrorism, and Security, SIAM Data Mining Conference, Bethesda, MD, 20-22 April, 2006. [http://www.siam.org/meetings/sdm06/workproceed/Link%20Analysis/15.pdf]
  12. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.00
    0.003138204 = product of:
      0.012552816 = sum of:
        0.012552816 = product of:
          0.025105633 = sum of:
            0.025105633 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.025105633 = score(doc=3608,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  13. Lavoie, B.; Connaway, L.S.; Dempsey, L.: Anatomy of aggregate collections : the example of Google print for libraries (2005) 0.00
    0.002353653 = product of:
      0.009414612 = sum of:
        0.009414612 = product of:
          0.018829225 = sum of:
            0.018829225 = weight(_text_:22 in 1184) [ClassicSimilarity], result of:
              0.018829225 = score(doc=1184,freq=2.0), product of:
                0.16222252 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046325076 = queryNorm
                0.116070345 = fieldWeight in 1184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=1184)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    26.12.2011 14:08:22
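
The nested "product of / sum of" breakdowns under each result above are Lucene "explain" traces for the engine's ClassicSimilarity (TF-IDF) ranking. As a rough, non-authoritative sketch of that arithmetic, the following Python snippet recomputes the score of result 1 from the constants in its trace; the function and parameter names are invented for illustration and are not Lucene's API:

    import math

    # Sketch of Lucene's ClassicSimilarity, reconstructed from the explain
    # trace of result 1 (term "22" in doc 4550); illustrative names only.
    def classic_similarity_score(freq, doc_freq, max_docs, query_norm,
                                 field_norm, coord_factors):
        tf = math.sqrt(freq)                               # 1.4142135 for freq=2.0
        idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # 3.5018296
        query_weight = idf * query_norm                    # 0.16222252
        field_weight = tf * idf * field_norm               # 0.19345059
        score = query_weight * field_weight                # 0.031382043
        for coord in coord_factors:                        # coord(1/2), coord(1/4)
            score *= coord
        return score

    print(classic_similarity_score(freq=2.0, doc_freq=3622, max_docs=44218,
                                   query_norm=0.046325076, field_norm=0.0390625,
                                   coord_factors=[0.5, 0.25]))
    # ~0.0039227554, matching the top-level score of result 1

The coord factors penalize hits that match only one of two query clauses and one of four overall, which is why every score on this page rounds to the displayed 0.00.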

Types

  • a 82
  • s 4
  • x 4
  • r 3
  • m 2
  • i 1
  • n 1