Search (55 results, page 1 of 3)

  • Filter: type_ss:"el"
  • Filter: theme_ss:"Elektronisches Publizieren"
  1. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.02
    0.020493407 = product of:
      0.051233515 = sum of:
        0.0068111527 = weight(_text_:a in 3582) [ClassicSimilarity], result of:
          0.0068111527 = score(doc=3582,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.12739488 = fieldWeight in 3582, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.078125 = fieldNorm(doc=3582)
        0.044422362 = product of:
          0.088844724 = sum of:
            0.088844724 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.088844724 = score(doc=3582,freq=4.0), product of:
                0.16237405 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046368346 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
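The indented breakdown above is Lucene/Solr "explain" output for the ClassicSimilarity (TF-IDF) ranking behind this result list. As a rough sketch (function and variable names are mine, not from the source), each term's weight is queryWeight × fieldWeight with tf = sqrt(termFreq), and the listed score of result 1 can be reproduced from the numbers shown:

```python
import math

def term_score(idf: float, query_norm: float, freq: float, field_norm: float) -> float:
    """One weight(_text_:term) node: queryWeight * fieldWeight."""
    query_weight = idf * query_norm       # idf(...) * queryNorm
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.046368346
# Result 1 (doc 3582): term "a" (freq=2) and term "22" (freq=4), fieldNorm=0.078125
w_a  = term_score(idf=1.153047,  query_norm=QUERY_NORM, freq=2.0, field_norm=0.078125)
w_22 = term_score(idf=3.5018296, query_norm=QUERY_NORM, freq=4.0, field_norm=0.078125)

# inner coord(1/2) halves the "22" subtree; outer coord(2/5) scales the sum
score = (w_a + w_22 * 0.5) * (2 / 5)  # close to the listed 0.020493407
```

The coord factors penalize documents matching only some query clauses; here two of five query clauses matched, hence coord(2/5) = 0.4.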
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
    Type
    a
  2. Schleim, S.: Warum die Wissenschaft nicht frei ist (2017) 0.01
    
    Date
    9.10.2017 15:48:22
    Type
    a
  3. Krüger, N.; Pianos, T.: Lernmaterialien für junge Forschende in den Wirtschaftswissenschaften als Open Educational Resources (OER) (2021) 0.01
    
    Date
    22. 5.2021 12:43:05
    Type
    a
  4. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.01
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else-a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, at any of the great national libraries of Europe-would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages and make annotations and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record, and send somebody straight to it with a link. Books would become as instantly available, searchable, copy-pasteable-as alive in the digital world-as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned it was said to be an "international catastrophe." 
When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
    Type
    a
  5. Strecker, D.: Nutzung der Schattenbibliothek Sci-Hub in Deutschland (2019) 0.01
    
    Date
    1. 1.2020 13:22:34
    Type
    a
  6. Taglinger, H.: Ausgevogelt, jetzt wird es ernst (2018) 0.01
    
    Date
    22. 1.2018 11:38:55
    Type
    a
  7. Brand, A.: CrossRef turns one (2001) 0.00
    
    Abstract
    CrossRef, the only full-blown application of the Digital Object Identifier (DOI®) System to date, is now a little over a year old. What started as a cooperative effort among publishers and technologists to prototype DOI-based linking of citations in e-journals evolved into an independent, non-profit enterprise in early 2000. We have made considerable headway during our first year, but there is still much to be done. When CrossRef went live with its collaborative linking service last June, it had enabled reference links in roughly 1,100 journals from a member base of 33 publishers, using a functional prototype system. The DOI-X prototype was described in an article published in D-Lib Magazine in February of 2000. On the occasion of CrossRef's first birthday as a live service, this article provides a non-technical overview of our progress to date and the major hurdles ahead. The electronic medium enriches the research literature arena for all players -- researchers, librarians, and publishers -- in numerous ways. Information has been made easier to discover, to share, and to sell. To take a simple example, the aggregation of book metadata by electronic booksellers was a huge boon to scholars seeking out obscure backlist titles, or discovering books they would never otherwise have known to exist. It was equally a boon for the publishers of those books, who saw an unprecedented surge in sales of backlist titles with the advent of centralized electronic bookselling. In the serials sphere, even in spite of price increases and the turmoil surrounding site licenses for some prime electronic content, libraries overall are now able to offer more content to more of their patrons. Yet undoubtedly, the key enrichment for academics and others navigating a scholarly corpus is linking, and in particular the linking that takes the reader out of one document and into another in the matter of a click or two. 
Since references are how authors make explicit the links between their work and precedent scholarship, what could be more fundamental to the reader than making those links immediately actionable? That said, automated linking is only really useful from a research perspective if it works across publications and across publishers. Not only do academics think about their own writings and those of their colleagues in terms of "author, title, rough date" -- the name of the journal itself is usually not high on the list of crucial identifying features -- but they are oblivious as to the identity of the publishers of all but their very favorite books and journals.
    Citation linking is thus also a huge benefit to journal publishers, because, as with electronic bookselling, it drives readers to their content in yet another way. In step with what was largely a subscription-based economy for journal sales, an "article economy" appears to be emerging. Journal publishers sell an increasing amount of their content on an article basis, whether through document delivery services, aggregators, or their own pay-per-view systems. At the same time, most research-oriented access to digitized material is still mediated by libraries. Resource discovery services must be able to authenticate subscribed or licensed users somewhere in the process, and ensure that a given user is accessing as a default the version of an article that their library may have already paid for. The well-known "appropriate copy" issue is addressed below. Another benefit to publishers from including outgoing citation links is simply the value they can add to their own journals. Publishers carry out the bulk of the technological prototyping and development that has produced electronic journals and the enhanced functionality readers have come to expect. There is clearly competition among them to provide readers with the latest features. That a number of publishers would agree to collaborate in the establishment of an infrastructure for reference linking was thus by no means predictable. CrossRef was incorporated in January of 2000 as a collaborative venture among 12 of the world's top scientific and scholarly publishers, both commercial and not-for-profit, to enable cross-publisher reference linking throughout the digital journal literature. The founding members were Academic Press, a Harcourt Company; the American Association for the Advancement of Science (the publisher of Science); American Institute of Physics (AIP); Association for Computing Machinery (ACM); Blackwell Science; Elsevier Science; The Institute of Electrical and Electronics Engineers, Inc. (IEEE); Kluwer Academic Publishers (a Wolters Kluwer Company); Nature; Oxford University Press; Springer-Verlag; and John Wiley & Sons, Inc. Start-up funds for CrossRef were provided as loans from eight of the original publishers.
    Type
    a
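The reference links CrossRef describes are DOI-based, and a DOI name resolves through the central doi.org proxy. A minimal sketch of forming such a resolver link (plain URL construction, no CrossRef API involved; the example DOI is the one cited in record 8 of this listing):

```python
def doi_url(doi: str) -> str:
    # DOI names resolve via the central proxy at https://doi.org/
    return f"https://doi.org/{doi}"

link = doi_url("10.1515/iwp-2018-0021")  # DOI cited in record 8 below
# -> "https://doi.org/10.1515/iwp-2018-0021"
```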
  8. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.00
    
    Content
    This text is a revised version of Herb, U. (2018): Zwangsehen und Bastarde : Wohin steuert Big Data die Wissenschaft? In: Information - Wissenschaft & Praxis, 69(2-3), pp. 81-88. DOI:10.1515/iwp-2018-0021.
    Type
    a
  9. Díaz, P.: Usability of hypermedia educational e-books (2003) 0.00
    
    Abstract
    To arrive at relevant and reliable conclusions concerning the usability of a hypermedia educational e-book, developers have to apply a well-defined evaluation procedure as well as a set of clear, concrete and measurable quality criteria. Evaluating an educational tool involves not only testing the user interface but also the didactic method, the instructional materials and the interaction mechanisms to prove whether or not they help users reach their goals for learning. This article presents a number of evaluation criteria for hypermedia educational e-books and describes how they are embedded into an evaluation procedure. This work is chiefly aimed at helping education developers evaluate their systems, as well as to provide them with guidance for addressing educational requirements during the design process. In recent years, more and more educational e-books are being created, whether by academics trying to keep pace with the advanced requirements of the virtual university or by publishers seeking to meet the increasing demand for educational resources that can be accessed anywhere and anytime, and that include multimedia information, hypertext links and powerful search and annotating mechanisms. To develop a useful educational e-book many things have to be considered, such as the reading patterns of users, accessibility for different types of users and computer platforms, copyright and legal issues, development of new business models and so on. Addressing usability is very important since e-books are interactive systems and, consequently, have to be designed with the needs of their users in mind. Evaluating usability involves analyzing whether systems are effective, efficient and secure for use; easy to learn and remember; and have a good utility. Any interactive system, as e-books are, has to be assessed to determine if it is really usable as well as useful. 
Such an evaluation is not only concerned with assessing the user interface but is also aimed at analyzing whether the system can be used in an efficient way to meet the needs of its users - who in the case of educational e-books are learners and teachers. Evaluation provides the opportunity to gather valuable information about design decisions. However, to be successful the evaluation has to be carefully planned and prepared so developers collect appropriate and reliable data from which to draw relevant conclusions.
    Type
    a
  10. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.00
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of EUR 800,000, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of six partners representing different players within the field of long-term preservation. The partners include:
    * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project
    * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen)
    * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin)
    * The Bavarian State Library in Munich (Bayerische Staatsbibliothek)
    * The Institute for Museum Information in Berlin (Institut für Museumskunde)
    * General Directorate of the Bavarian State Archives (GDAB)
    As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, coming to agreement with institutions throughout the country to cooperate on tasks for a long-term preservation effort has taken a great deal of effort. Although considerable attention had been paid to the preservation of physical media like CD-ROMs, technologies for the long-term preservation of digital publications like e-books, digital dissertations, websites, etc., are still lacking. Considering the importance of the task within the federal structure of Germany, with each federal state responsible for its own science and culture activities, it is obvious that any successful solution of these issues in Germany must be a cooperative one. Since 2000, there have been discussions about strategies and techniques for long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions.
A key part of all the previous activities was focusing on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was done on behalf of the German Ministry of Education and Research in 2002, where the vision of a digital library in 2010 that can meet the changing and increasing needs of users was developed and described in detail, including the infrastructure required and how the digital library would work technically, what it would contain and how it would be organized. The outcome was a strategic plan for certain selected specialist areas, where, amongst other topics, a future call for action for long-term preservation was defined, described and explained against the background of practical experience.
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29 - 30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics:
    * Overlapping problems
      o Collection and preservation of digital objects (selection criteria, preservation policy)
      o Definition of criteria for trusted repositories
      o Creation of models of cooperation, etc.
    * Digital objects production process
      o Analysis of potential conflicts between production and long-term preservation
      o Documentation of existing document models and recommendations for standard models to be used for long-term preservation
      o Identification systems for digital objects, etc.
    * Transfer of digital objects
      o Object data and metadata
      o Transfer protocols and interoperability
      o Handling of different document types, e.g. dynamic publications, etc.
    * Long-term preservation of digital objects
      o Design and prototype implementation of depot systems for digital objects (OAIS was chosen as the best functional model.)
      o Authenticity
      o Functional requirements on user interfaces of a depot system
      o Identification systems for digital objects, etc.
    At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany, comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
    Type
    a
  11. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.00
    
    Content
    This study investigates the development of open access (OA) to journal articles from authors affiliated with German universities and non-university research institutions in the period 2010-2018. Beyond determining the overall share of openly available articles, a systematic classification of distinct categories of OA publishing allowed us to identify different patterns of adoption of OA. Taking into account the particularities of the German research landscape, variations in terms of productivity, OA uptake and approaches to OA are examined at the meso-level and possible explanations are discussed. The development of the OA uptake is analysed for the different research sectors in Germany (universities, non-university research institutes of the Helmholtz Association, Fraunhofer Society, Max Planck Society, Leibniz Association, and government research agencies). Combining several data sources (incl. Web of Science, Unpaywall, an authority file of standardised German affiliation information, the ISSN-Gold-OA 3.0 list, and OpenDOAR), the study confirms the growth of the OA share mirroring the international trend reported in related studies. We found that 45% of all considered articles during the observed period were openly available at the time of analysis. Our findings show that subject-specific repositories are the most prevalent type of OA. However, the percentages for publication in fully OA journals and OA via institutional repositories show similarly steep increases. Enabling data-driven decision-making regarding the implementation of OA in Germany at the institutional level, the results of this study furthermore can serve as a baseline to assess the impact recent transformative agreements with major publishers will likely have on scholarly communication.
    Type
    a
  12. Bailey, C.W. Jr.: Scholarly electronic publishing bibliography (2003) 0.00
    
    Content
    Table of Contents 1 Economic Issues* 2 Electronic Books and Texts 2.1 Case Studies and History 2.2 General Works* 2.3 Library Issues* 3 Electronic Serials 3.1 Case Studies and History 3.2 Critiques 3.3 Electronic Distribution of Printed Journals 3.4 General Works* 3.5 Library Issues* 3.6 Research* 4 General Works* 5 Legal Issues 5.1 Intellectual Property Rights* 5.2 License Agreements 5.3 Other Legal Issues 6 Library Issues 6.1 Cataloging, Identifiers, Linking, and Metadata* 6.2 Digital Libraries* 6.3 General Works* 6.4 Information Integrity and Preservation* 7 New Publishing Models* 8 Publisher Issues 8.1 Digital Rights Management* 9 Repositories and E-Prints* Appendix A. Related Bibliographies by the Same Author Appendix B. About the Author
  13. Darnton, R.: Im Besitz des Wissens : Von der Gelehrtenrepublik des 18. Jahrhunderts zum digitalen Google-Monopol (2009) 0.00
    
    Theme
    Information
    Type
    a
  14. Academic publishing : No peeking (2014) 0.00
    
    Abstract
    A publishing giant goes after the authors of its journals' papers
    Type
    a
  15. Borghoff, U.M.; Rödig, P.; Schmalhofer, F.: DFG-Projekt Datenbankgestützte Langzeitarchivierung digitaler Objekte : Schlussbericht Juli 2005 - Geschäftszeichen 554 922(1) UV BW München (2005) 0.00
    
    Abstract
    Over the past decades, the volume of digital publications has grown exponentially. Yet these digital holdings are threatened by the creeping obsolescence of data formats, software, and hardware. The increasing complexity of newer documents and their associated rendering environments poses a further problem. The topic of long-term preservation was long neglected, but it is increasingly entering the awareness of those responsible and of the public, not least because of spectacular data losses. The aim of this study is to develop foundations and building blocks for a technical solution and to show how it can be embedded in the responsibilities of an archiving organisation. What has been missing is a systematic approach to building up technical knowledge that does justice to the heterogeneity and complexity, as well as the obsolescence already present, in the world of digital publishing. As a first step, we therefore develop a model dedicated specifically to the technical aspects of digital objects. This model makes it possible to characterise and classify digital objects with respect to archiving concerns and to assign the relevant technical foundations precisely. On this basis, modular metadata schemas that specifically support long-term preservation can, among other things, be derived systematically. The model also contributes to the formulation of associated ontologies. Furthermore, the modularity of the metadata schemas and the uniform terminology of an ontology promote federation and cooperation among archiving organisations and systems. In a further step, building on the model developed, technically oriented processes for fulfilling archiving tasks are derived systematically.
The development of a dedicated model rests on the assessment that reference models such as OAIS (Open Archival Information System), while offering a suitable starting point at the conceptual level, are too general and describe upstream and downstream processes only as interfaces. The solution approaches derived from the model are initially independent of any concrete realisation. As a contribution to implementation, a separate section discusses in detail the use of database management systems (DBMS) as an implementation basis.
  16. Fallaw, C.; Dunham, E.; Wickes, E.; Strong, D.; Stein, A.; Zhang, Q.; Rimkus, K.; Ingram, B.; Imker, H.J.: Overly honest data repository development (2016) 0.00
    
    Abstract
    After a year of development, the library at the University of Illinois at Urbana-Champaign has launched a repository, called the Illinois Data Bank (https://databank.illinois.edu/), to provide Illinois researchers with a free, self-serve publishing platform that centralizes, preserves, and provides persistent and reliable access to Illinois research data. This article presents a holistic view of the development by discussing our overarching technical, policy, and interface strategies. By openly presenting our design decisions, the rationales behind those decisions, and the associated challenges, this paper aims to contribute to the library community's work to develop repository services that meet growing data preservation and sharing needs.
    Type
    a
  17. Zhang, A.: Multimedia file formats on the Internet : a beginner's guide for PC users (1995) 0.00
    
  18. Erkal, E.: Allegations linking Sci-Hub with Russian intelligence (2019) 0.00
    
    Abstract
    The Washington Post reports that the US Justice Department has launched a criminal and intelligence investigation into Alexandra Elbakyan, founder of Sci-Hub.
    Type
    a
  19. Jochum, U.: Donald Trump und der bibliothekarisch-bürokratische Allianzkomplex (19.2.2017) 0.00
    
    Type
    a
  20. Publish and don't be damned : some science journals that claim to peer review papers do not do so (2018) 0.00
    
    Content
    "Whether to get a promotion or merely a foot in the door, academics have long known that they must publish papers, typically the more the better. Tallying scholarly publications to evaluate their authors has been common since the invention of scientific journals in the 17th century. So, too, has the practice of journal editors asking independent, usually anonymous, experts to scrutinise manuscripts and reject those deemed flawed-a quality-control process now known as peer review. Of late, however, this habit of according importance to papers labelled as "peer reviewed" has become something of a gamble. A rising number of journals that claim to review submissions in this way do not bother to do so. Not coincidentally, this seems to be leading some academics to inflate their publication lists with papers that might not pass such scrutiny."