Search (4 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Elektronisches Publizieren"
  • type_ss:"el"
  1. Hobert, A.; Jahn, N.; Mayr, P.; Schmidt, B.; Taubert, N.: Open access uptake in Germany 2010-2018 : adoption in a diverse research landscape (2021) 0.01
    0.014289798 = product of:
      0.057159193 = sum of:
        0.0105139585 = weight(_text_:und in 250) [ClassicSimilarity], result of:
          0.0105139585 = score(doc=250,freq=8.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.19590102 = fieldWeight in 250, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.0105139585 = weight(_text_:und in 250) [ClassicSimilarity], result of:
          0.0105139585 = score(doc=250,freq=8.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.19590102 = fieldWeight in 250, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.015103361 = weight(_text_:der in 250) [ClassicSimilarity], result of:
          0.015103361 = score(doc=250,freq=16.0), product of:
            0.054091092 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.024215192 = queryNorm
            0.27922085 = fieldWeight in 250, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.0105139585 = weight(_text_:und in 250) [ClassicSimilarity], result of:
          0.0105139585 = score(doc=250,freq=8.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.19590102 = fieldWeight in 250, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
        0.0105139585 = weight(_text_:und in 250) [ClassicSimilarity], result of:
          0.0105139585 = score(doc=250,freq=8.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.19590102 = fieldWeight in 250, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.03125 = fieldNorm(doc=250)
      0.25 = coord(5/20)
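    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation: each matching clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = sqrt(tf) * idf * fieldNorm, and idf = 1 + ln(maxDocs / (docFreq + 1)); the clause sum is then scaled by the coordination factor coord(matching clauses / total clauses). The following Python sketch re-derives the 0.014289798 shown above from the printed values; it reuses queryNorm and fieldNorm as given rather than computing them, so it illustrates the arithmetic only, not the full ranking pipeline.

```python
import math

# Re-derive the first hit's ClassicSimilarity score from the values printed
# in the explain output above (queryNorm and fieldNorm reused as given).
def clause_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                             # 2.828427 for freq=8
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 2.216367 for "und"
    query_weight = idf * query_norm                  # 0.05366975
    field_weight = tf * idf * field_norm             # 0.19590102
    return query_weight * field_weight               # 0.0105139585

und = clause_score(8, 13101, 44218, 0.024215192, 0.03125)   # four identical "und" clauses
der = clause_score(16, 12875, 44218, 0.024215192, 0.03125)  # one "der" clause

raw_sum = 4 * und + der     # 0.057159193 = sum over the five matching clauses
final = raw_sum * (5 / 20)  # coord(5/20) = 0.25  ->  0.014289798
print(final)
```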
    
    Abstract
    This is a bibliometric study of the development of open-access availability of scholarly journal articles published in Germany between 2010 and 2018 and indexed in the Web of Science. A particular focus of the analysis was the question of whether, and to what extent, the open-access profiles of universities and non-university research institutions in Germany differ from one another.
    Footnote
    The article is accompanied by an interactive data supplement that allows OA shares to be compared at the level of individual institutions: https://subugoe.github.io/oauni/articles/supplement.html. The work was carried out jointly by the BMBF projects OAUNI and OASE within the funding line "Quantitative Wissenschaftsforschung": https://www.wihoforschung.de/de/quantitative-wissenschaftsforschung-1573.php.
  2. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.00
    0.0031541877 = product of:
      0.015770938 = sum of:
        0.0039427346 = weight(_text_:und in 1195) [ClassicSimilarity], result of:
          0.0039427346 = score(doc=1195,freq=2.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.07346288 = fieldWeight in 1195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
        0.0039427346 = weight(_text_:und in 1195) [ClassicSimilarity], result of:
          0.0039427346 = score(doc=1195,freq=2.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.07346288 = fieldWeight in 1195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
        0.0039427346 = weight(_text_:und in 1195) [ClassicSimilarity], result of:
          0.0039427346 = score(doc=1195,freq=2.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.07346288 = fieldWeight in 1195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
        0.0039427346 = weight(_text_:und in 1195) [ClassicSimilarity], result of:
          0.0039427346 = score(doc=1195,freq=2.0), product of:
            0.05366975 = queryWeight, product of:
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.024215192 = queryNorm
            0.07346288 = fieldWeight in 1195, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.216367 = idf(docFreq=13101, maxDocs=44218)
              0.0234375 = fieldNorm(doc=1195)
      0.2 = coord(4/20)
    
    Abstract
    Sponsored by the German Ministry of Education and Research with funding of EUR 800,000, the German Network of Expertise in long-term storage of digital resources (nestor) began in June 2003 as a cooperative effort of six partners representing different players within the field of long-term preservation. The partners include:
    * The German National Library (Die Deutsche Bibliothek) as the lead institution for the project
    * The State and University Library of Lower Saxony Göttingen (Staats- und Universitätsbibliothek Göttingen)
    * The Computer and Media Service and the University Library of Humboldt-University Berlin (Humboldt-Universität zu Berlin)
    * The Bavarian State Library in Munich (Bayerische Staatsbibliothek)
    * The Institute for Museum Studies in Berlin (Institut für Museumskunde)
    * The General Directorate of the Bavarian State Archives (GDAB)
    As in other countries, long-term preservation of digital resources has become an important issue in Germany in recent years. Nevertheless, reaching agreement among institutions across the country to cooperate on a long-term preservation effort has taken a great deal of work. Although considerable attention has been paid to the preservation of physical media such as CD-ROMs, technologies for the long-term preservation of digital publications such as e-books, digital dissertations, and websites are still lacking. Given the importance of the task within the federal structure of Germany, where each federal state is responsible for its own science and culture activities, it is clear that any successful solution to these issues in Germany must be a cooperative one.
    Since 2000, there have been discussions about strategies and techniques for the long-term archiving of digital information, particularly within the distributed structure of Germany's library and archival institutions. A key part of all previous activities was a focus on using existing standards and analyzing the context in which those standards would be applied. One such activity, the Digital Library Forum Planning Project, was carried out on behalf of the German Ministry of Education and Research in 2002. It developed and described in detail the vision of a digital library in 2010 that can meet the changing and increasing needs of users, including the required infrastructure, how the digital library would work technically, what it would contain, and how it would be organized. The outcome was a strategic plan for selected specialist areas in which, among other topics, a future call for action for long-term preservation was defined, described, and explained against the background of practical experience.
  3. Zhang, A.: Multimedia file formats on the Internet : a beginner's guide for PC users (1995) 0.00
    8.0097665E-4 = product of:
      0.016019532 = sum of:
        0.016019532 = weight(_text_:der in 3212) [ClassicSimilarity], result of:
          0.016019532 = score(doc=3212,freq=2.0), product of:
            0.054091092 = queryWeight, product of:
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.024215192 = queryNorm
            0.29615843 = fieldWeight in 3212, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.2337668 = idf(docFreq=12875, maxDocs=44218)
              0.09375 = fieldNorm(doc=3212)
      0.05 = coord(1/20)
    
    Abstract
    An overview of the various file formats used on the Internet and of the ways these files can be used (including information on software, etc.).
  4. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.00
    3.2808242E-4 = product of:
      0.006561648 = sum of:
        0.006561648 = product of:
          0.013123296 = sum of:
            0.013123296 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.013123296 = score(doc=3608,freq=2.0), product of:
                0.08479747 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024215192 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.05 = coord(1/20)
    
    Abstract
    You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings at the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals that were going to be placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages, make annotations, and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable (as alive in the digital world) as web pages.
    It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we'd amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time.
    On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe." When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who'd had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.