Search (16 results, page 1 of 1)

  • theme_ss:"Elektronisches Publizieren"
  • type_ss:"el"
  1. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.03
    0.032388795 = product of:
      0.06477759 = sum of:
        0.0072827823 = weight(_text_:information in 1195) [ClassicSimilarity], result of:
          0.0072827823 = score(doc=1195,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.08228803 = fieldWeight in 1195, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
        0.057494808 = weight(_text_:standards in 1195) [ClassicSimilarity], result of:
          0.057494808 = score(doc=1195,freq=6.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.25587338 = fieldWeight in 1195, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
      0.5 = coord(2/4)
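    The breakdown above is Lucene's standard "explain" output for this hit. As a reading aid, the following minimal sketch reproduces the 0.03 score shown for this entry, assuming Lucene's ClassicSimilarity conventions (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))); queryNorm and fieldNorm are simply taken from the output above rather than recomputed, so small rounding differences are expected.

      import math

      def term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
          # weight = queryWeight * fieldWeight, mirroring the explain tree above
          tf = math.sqrt(freq)                             # 2.0 for freq=4, 2.4494898 for freq=6
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm                  # e.g. 0.22470023 for "standards"
          field_weight = tf * idf * field_norm             # e.g. 0.25587338 for "standards"
          return query_weight * field_weight

      query_norm, field_norm = 0.050415643, 0.0234375      # values shown above for doc 1195
      w_information = term_weight(4.0, 20772, 44218, query_norm, field_norm)  # ~0.0072828
      w_standards = term_weight(6.0, 1393, 44218, query_norm, field_norm)     # ~0.0574948
      score = 0.5 * (w_information + w_standards)          # coord(2/4) = 0.5
      print(score)                                         # ~0.0324, i.e. the 0.03 shown above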
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of 800,000 EUR, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of six partners representing different players within the field of long-term preservation. The partners include:
    * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project
    * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen)
    * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin)
    * The Bavarian State Library in Munich (Bayerische Staatsbibliothek)
    * The Institute for Museum Information in Berlin (Institut für Museumskunde)
    * The General Directorate of the Bavarian State Archives (GDAB)
    As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, bringing institutions throughout the country to agree to cooperate on the tasks of a long-term preservation effort has taken a great deal of work. Although considerable attention has been paid to the preservation of physical media like CD-ROMs, technologies for the long-term preservation of digital publications such as e-books, digital dissertations and websites are still lacking. Considering the importance of the task within the federal structure of Germany, where each federal state is responsible for its own science and culture activities, it is obvious that a successful approach to these issues in Germany must be a cooperative one. Since 2000 there have been discussions about strategies and techniques for the long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all the previous activities was a focus on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was carried out on behalf of the German Ministry of Education and Research in 2002. It developed and described in detail the vision of a digital library in 2010 that can meet the changing and increasing needs of users, including the infrastructure required, how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas in which, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29-30 October 2002, where the major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants agreed to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital object production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standard models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen as the best functional model)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, the participants decided to establish a permanent distributed infrastructure for the long-term preservation and long-term accessibility of digital resources in Germany, comparable, for example, to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up through the above-mentioned three-year funding project.
  2. Pinfield, S.: How do physicists use an e-print archive? : implications for institutional e-print services (2001) 0.01
    0.0138311 = product of:
      0.0553244 = sum of:
        0.0553244 = weight(_text_:standards in 1226) [ClassicSimilarity], result of:
          0.0553244 = score(doc=1226,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.24621427 = fieldWeight in 1226, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1226)
      0.25 = coord(1/4)
    
    Abstract
    It has been suggested that institutional e-print services will become an important way of achieving the wide availability of e-prints across a broad range of subject disciplines. However, as yet there are few exemplars of this sort of service. This paper describes how physicists make use of an established centralized subject-based e-prints service, arXiv (formerly known as the Los Alamos XXX service), and discusses the possible implications of this use for institutional multidisciplinary e-print archives. A number of key points are identified, including technical issues (such as file formats and user interface design), management issues (such as submission procedures and administrative staff support), economic issues (such as installation and support costs), quality issues (such as peer review and quality control criteria), policy issues (such as digital preservation and collection development standards), academic issues (such as scholarly communication cultures and publishing trends), and legal issues (such as copyright and intellectual property rights). These are discussed with reference to the project to set up a pilot institutional e-print service at the University of Nottingham, UK. This project is being used as a pragmatic way of investigating the issues surrounding institutional e-print services, particularly in seeing how flexible the e-prints model actually is and how easily it can adapt itself to disciplines other than physics.
  3. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.01
    0.01207495 = product of:
      0.0482998 = sum of:
        0.0482998 = product of:
          0.0965996 = sum of:
            0.0965996 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.0965996 = score(doc=3582,freq=4.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  4. Schleim, S.: Warum die Wissenschaft nicht frei ist (2017) 0.01
    0.0068306234 = product of:
      0.027322493 = sum of:
        0.027322493 = product of:
          0.054644987 = sum of:
            0.054644987 = weight(_text_:22 in 3882) [ClassicSimilarity], result of:
              0.054644987 = score(doc=3882,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.30952093 = fieldWeight in 3882, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3882)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    9.10.2017 15:48:22
  5. Krüger, N.; Pianos, T.: Lernmaterialien für junge Forschende in den Wirtschaftswissenschaften als Open Educational Resources (OER) (2021) 0.01
    0.0059767957 = product of:
      0.023907183 = sum of:
        0.023907183 = product of:
          0.047814365 = sum of:
            0.047814365 = weight(_text_:22 in 252) [ClassicSimilarity], result of:
              0.047814365 = score(doc=252,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.2708308 = fieldWeight in 252, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=252)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 5.2021 12:43:05
  6. Strecker, D.: Nutzung der Schattenbibliothek Sci-Hub in Deutschland (2019) 0.01
    0.0051229675 = product of:
      0.02049187 = sum of:
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 596) [ClassicSimilarity], result of:
              0.04098374 = score(doc=596,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=596)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    1. 1.2020 13:22:34
  7. Taglinger, H.: Ausgevogelt, jetzt wird es ernst (2018) 0.00
    0.00426914 = product of:
      0.01707656 = sum of:
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 4281) [ClassicSimilarity], result of:
              0.03415312 = score(doc=4281,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 4281, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4281)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 1.2018 11:38:55
  8. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.00
    0.0034331365 = product of:
      0.013732546 = sum of:
        0.013732546 = weight(_text_:information in 5624) [ClassicSimilarity], result of:
          0.013732546 = score(doc=5624,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1551638 = fieldWeight in 5624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=5624)
      0.25 = coord(1/4)
    
    Content
    The text is a revised version of Herb, U. (2018): Zwangsehen und Bastarde : Wohin steuert Big Data die Wissenschaft? In: Information - Wissenschaft & Praxis 69(2-3), pp. 81-88. DOI: 10.1515/iwp-2018-0021.
  9. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.00
    0.0034153117 = product of:
      0.013661247 = sum of:
        0.013661247 = product of:
          0.027322493 = sum of:
            0.027322493 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.027322493 = score(doc=3608,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else - a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or at any of the great national libraries of Europe - would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable and copy-pasteable - as alive in the digital world - as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  10. Bailey, C.W. Jr.: Scholarly electronic publishing bibliography (2003) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 1656) [ClassicSimilarity], result of:
          0.01029941 = score(doc=1656,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 1656, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1656)
      0.25 = coord(1/4)
    
    Content
    Table of Contents
    1 Economic Issues*
    2 Electronic Books and Texts
      2.1 Case Studies and History
      2.2 General Works*
      2.3 Library Issues*
    3 Electronic Serials
      3.1 Case Studies and History
      3.2 Critiques
      3.3 Electronic Distribution of Printed Journals
      3.4 General Works*
      3.5 Library Issues*
      3.6 Research*
    4 General Works*
    5 Legal Issues
      5.1 Intellectual Property Rights*
      5.2 License Agreements
      5.3 Other Legal Issues
    6 Library Issues
      6.1 Cataloging, Identifiers, Linking, and Metadata*
      6.2 Digital Libraries*
      6.3 General Works*
      6.4 Information Integrity and Preservation*
    7 New Publishing Models*
    8 Publisher Issues
      8.1 Digital Rights Management*
    9 Repositories and E-Prints*
    Appendix A. Related Bibliographies by the Same Author
    Appendix B. About the Author
  11. Darnton, R.: Im Besitz des Wissens : Von der Gelehrtenrepublik des 18. Jahrhunderts zum digitalen Google-Monopol (2009) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 2335) [ClassicSimilarity], result of:
          0.01029941 = score(doc=2335,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 2335, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2335)
      0.25 = coord(1/4)
    
    Theme
    Information
  12. Pampel, H.: Empfehlungen für transformative Zeitschriftenverträge mit Publikationsdienstleistern veröffentlicht (2022) 0.00
    0.0025748524 = product of:
      0.01029941 = sum of:
        0.01029941 = weight(_text_:information in 805) [ClassicSimilarity], result of:
          0.01029941 = score(doc=805,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=805)
      0.25 = coord(1/4)
    
    Abstract
    Mailtext: "Im Rahmen der Schwerpunktinitiative "Digitale Information" der Allianz der Wissenschaftsorganisationen wurden jetzt "Empfehlungen für transformative Zeitschriftenverträge mit Publikationsdienstleistern" veröffentlicht. Die formulierten Kriterien dienen als gemeinsamer und handlungsleitender Rahmen der Akteur:innen aus allen Wissenschaftsorganisationen, d.h. Hochschulen ebenso wie außeruniversitäre Forschungseinrichtungen, für Verhandlungen mit Publikationsdienstleistern. Dabei bildet die Forderung nach größtmöglicher Kostentransparenz und Kosteneffizienz im Gesamtsystem den Kern des Handelns der Wissenschaftsorganisationen im Kontext ihrer Open-Access-Strategie für die Jahre 2021-2025. Diese Kriterien gliedern sich in die Aspekte Transformation von Zeitschriften, Preisgestaltung, Transparenz, Workflow, Preprints, Qualitätssicherung, Metadaten und Schnittstellen, Statistiken, Tracking und Waiver. Deutsche Version: https://doi.org/10.48440/allianzoa.045 Englische Version: https://doi.org/10.48440/allianzoa.046 Siehe auch: Empfehlungen für transformative Zeitschriftenverträge mit Publikationsdienstleistern veröffentlicht https://www.allianzinitiative.de/2022/11/24/empfehlungen-fuer-transformative-zeitschriftenvertraege-mit-publikationsdienstleistern-veroeffentlicht/ Recommendations for Transformative Journal Agreements with Providers of Publishing Services published https://www.allianzinitiative.de/2022/11/24/recommendations-for-transformative-journal-agreements-with-providers-of-publishing-services-published/?lang=en"
  13. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.00
    0.002427594 = product of:
      0.009710376 = sum of:
        0.009710376 = weight(_text_:information in 1198) [ClassicSimilarity], result of:
          0.009710376 = score(doc=1198,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.10971737 = fieldWeight in 1198, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=1198)
      0.25 = coord(1/4)
    
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves not only testing the user interface but also the didactic method, the instructional materials and the interaction mechanisms, to prove whether or not they help users reach their learning goals. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. This work is chiefly aimed at helping education developers evaluate their systems, as well as at providing them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books are being created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotating mechanisms. To develop a useful educational e-book many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, development of new business models and so on. Addressing usability is very important since e-books are interactive systems and, consequently, have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure for use; easy to learn and remember; and have good utility. Any interactive system, as e-books are, has to be assessed to determine if it is really usable as well as useful. Such an evaluation is not only concerned with assessing the user interface but is also aimed at analyzing whether the system can be used in an efficient way to meet the needs of its users - who in the case of educational e-books are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful the evaluation has to be carefully planned and prepared so developers collect appropriate and reliable data from which to draw relevant conclusions.
  14. Borghoff, U.M.; Rödig, P.; Schmalhofer, F.: DFG-Projekt Datenbankgestützte Langzeitarchivierung digitaler Objekte : Schlussbericht Juli 2005 - Geschäftszeichen 554 922(1) UV BW München (2005) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 4250) [ClassicSimilarity], result of:
          0.006866273 = score(doc=4250,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 4250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4250)
      0.25 = coord(1/4)
    
    Abstract
    Over the last decades the volume of digital publications has grown exponentially. Yet digital holdings are threatened by the creeping obsolescence of data formats, software and hardware; the increasing complexity of newer documents and of the environments needed to render them poses a further problem. The topic of long-term preservation was neglected for a long time, but it is increasingly entering the awareness of those responsible and of the public, not least because of spectacular losses of data. The aim of this study is to develop foundations and building blocks for a technical solution and to show how it can be embedded in the task areas of an archiving organisation. What has been missing is a systematic approach to building up technical knowledge that does justice to the heterogeneity and complexity, as well as to the obsolescence already present, in the world of digital publishing. In a first step we therefore develop a model dedicated specifically to the technical aspects of digital objects. This model makes it possible to characterise and classify digital objects with respect to archiving aspects and to assign the relevant technical foundations to them precisely. On this basis, modular metadata schemas that specifically support long-term preservation can, among other things, be derived systematically. The model also contributes to the formulation of the corresponding ontologies. Furthermore, the modularity of the metadata schemas and the uniform terminology of an ontology promote federation and cooperation among archiving organisations and systems. In a further step, building on the model developed, the derivation of technically oriented processes for fulfilling archiving tasks is systematised. The development of a model of our own rests on the assessment that reference models such as OAIS (Open Archival Information System) offer a suitable starting point at the conceptual level but are too general and describe upstream and downstream processes only as interfaces. The solution approaches derived from the model are initially independent of any concrete realisation. As a contribution to implementation, the use of database management systems (DBMS) as an implementation basis is discussed in detail in a separate section.
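    Purely as an illustration of the modular, technically oriented metadata the report describes (not the project's actual schema; every class and field name below is hypothetical), one might keep the description of a digital object separate from the modules describing its format and its rendering environment:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class FormatModule:
          # technical format aspects of the stored object (hypothetical fields)
          mime_type: str
          format_version: str

      @dataclass
      class EnvironmentModule:
          # software environment needed to render or use the object (hypothetical fields)
          viewer: str
          platform: str

      @dataclass
      class ArchivalObject:
          identifier: str                                    # persistent identifier
          formats: List[FormatModule] = field(default_factory=list)
          environments: List[EnvironmentModule] = field(default_factory=list)

      # Example: a digital dissertation archived as PDF/A
      obj = ArchivalObject(
          identifier="urn:nbn:de:example-123",
          formats=[FormatModule("application/pdf", "PDF/A-1b")],
          environments=[EnvironmentModule(viewer="any PDF/A-capable reader", platform="any")],
      )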
  15. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    0.0017165683 = product of:
      0.006866273 = sum of:
        0.006866273 = weight(_text_:information in 250) [ClassicSimilarity], result of:
          0.006866273 = score(doc=250,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.0775819 = fieldWeight in 250, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
      0.25 = coord(1/4)
    
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
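    As a rough illustration of the kind of OA classification sketched above (a minimal sketch only; the field names follow Unpaywall-style records and are assumptions for illustration, not the categories or data model actually used in the study):

      def classify_oa(record: dict) -> str:
          # record is an Unpaywall-style dict; the field names are illustrative assumptions
          if not record.get("is_oa"):
              return "closed"
          location = record.get("best_oa_location") or {}
          if record.get("journal_is_oa"):
              return "gold"       # article in a fully OA journal
          if location.get("host_type") == "publisher":
              return "hybrid"     # OA at the publisher of a subscription journal
          if location.get("host_type") == "repository":
              return "green"      # OA only via a subject or institutional repository
          return "other"

      print(classify_oa({"is_oa": True, "journal_is_oa": False,
                         "best_oa_location": {"host_type": "repository"}}))   # green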
  16. Brand, A.: CrossRef turns one (2001) 0.00
    0.0012874262 = product of:
      0.005149705 = sum of:
        0.005149705 = weight(_text_:information in 1222) [ClassicSimilarity], result of:
          0.005149705 = score(doc=1222,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.058186423 = fieldWeight in 1222, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1222)
      0.25 = coord(1/4)
    
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in the matter of a click or two. Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
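    The "immediately actionable" reference links described above rest on DOI resolution: a DOI registered with CrossRef redirects, via the doi.org proxy, to whatever landing page the publisher has deposited for it. A minimal sketch of that lookup (using today's doi.org proxy; the example DOI is the one cited in entry 8 above):

      import urllib.request

      def resolve_doi(doi: str) -> str:
          # Ask the DOI proxy where the DOI currently points; redirects are followed
          request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
          with urllib.request.urlopen(request) as response:
              return response.geturl()   # final landing-page URL after all redirects

      # Example, with the DOI cited in entry 8 above:
      # print(resolve_doi("10.1515/iwp-2018-0021"))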