Search (33 results, page 1 of 2)

  • × theme_ss:"Elektronisches Publizieren"
  • × type_ss:"a"
  • × year_i:[2010 TO 2020}
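The crossed-out entries above are the active filter queries in Lucene/Solr syntax; `year_i:[2010 TO 2020}` is a half-open range, including 2010 but excluding 2020. As a hedged sketch (the parameter set beyond the three `fq` values is an assumed Solr convention, not taken from this page), the equivalent request parameters could be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical request parameters; only the three fq values come from the
# facet list above, everything else is an assumed Solr convention.
params = [
    ("q", "*:*"),
    ("fq", 'theme_ss:"Elektronisches Publizieren"'),
    ("fq", 'type_ss:"a"'),
    ("fq", "year_i:[2010 TO 2020}"),  # [ = inclusive bound, } = exclusive bound
    ("rows", "20"),
    ("debugQuery", "true"),  # requests per-hit score explanations like those on this page
]
query_string = urlencode(params)
```

With `debugQuery=true`, Solr attaches a score breakdown to each hit, which is the kind of explain output printed under every result here.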
  1. Somers, J.: Torching the modern-day library of Alexandria : somewhere at Google there is a database containing 25 million books and nobody is allowed to read them. (2017) 0.05
    0.047296606 = product of:
      0.06306214 = sum of:
        0.023270661 = weight(_text_:web in 3608) [ClassicSimilarity], result of:
          0.023270661 = score(doc=3608,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.14422815 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.026394749 = weight(_text_:search in 3608) [ClassicSimilarity], result of:
          0.026394749 = score(doc=3608,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.15360467 = fieldWeight in 3608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=3608)
        0.013396727 = product of:
          0.026793454 = sum of:
            0.026793454 = weight(_text_:22 in 3608) [ClassicSimilarity], result of:
              0.026793454 = score(doc=3608,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.15476047 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3608)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
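Each of these breakdowns is Lucene's ClassicSimilarity (TF-IDF) explain output: a term's contribution is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm. A minimal sketch (the function name is ours; the formula is Lucene's standard one) reproducing the `_text_:web` contribution for document 3608 from the figures above:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """One leaf of a Lucene ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.2635105 for "web"
    query_weight = idf * query_norm                  # 0.16134618
    field_weight = tf * idf * field_norm             # 0.14422815
    return query_weight * field_weight

# The _text_:web contribution to document 3608: ~0.023270661
web = term_score(2.0, 4597, 44218, 0.049439456, 0.03125)

# The document score then multiplies the sum of matching clause scores by
# coord(3/4) = 0.75, since 3 of the 4 query clauses matched:
# 0.06306214 * 0.75 = 0.047296606
```

The same formula, with the respective `docFreq` and `fieldNorm`, reproduces every leaf in the trees on this page.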
    
    Abstract
     You were going to get one-click access to the full text of nearly every book that's ever been published. Books still in print you'd have to pay for, but everything else (a collection slated to grow larger than the holdings of the Library of Congress, Harvard, the University of Michigan, or any of the great national libraries of Europe) would have been available for free at terminals placed in every local library that wanted one. At the terminal you were going to be able to search tens of millions of books and read every page of any book you found. You'd be able to highlight passages, make annotations, and share them; for the first time, you'd be able to pinpoint an idea somewhere inside the vastness of the printed record and send somebody straight to it with a link. Books would become as instantly available, searchable, and copy-pasteable (as alive in the digital world) as web pages. It was to be the realization of a long-held dream. "The universal library has been talked about for millennia," Richard Ovenden, the head of Oxford's Bodleian Libraries, has said. "It was possible to think in the Renaissance that you might be able to amass the whole of published knowledge in a single room or a single institution." In the spring of 2011, it seemed we had amassed it in a terminal small enough to fit on a desk. "This is a watershed event and can serve as a catalyst for the reinvention of education, research, and intellectual life," one eager observer wrote at the time. On March 22 of that year, however, the legal agreement that would have unlocked a century's worth of books and peppered the country with access terminals to a universal library was rejected under Rule 23(e)(2) of the Federal Rules of Civil Procedure by the U.S. District Court for the Southern District of New York. When the library at Alexandria burned, it was said to be an "international catastrophe."
     When the most significant humanities project of our time was dismantled in court, the scholars, archivists, and librarians who had had a hand in its undoing breathed a sigh of relief, for they believed, at the time, that they had narrowly averted disaster.
  2. Walters, W.H.; Linvill, A.C.: Bibliographic index coverage of open-access journals in six subject areas (2011) 0.02
    0.024869673 = product of:
      0.049739346 = sum of:
        0.032993436 = weight(_text_:search in 4635) [ClassicSimilarity], result of:
          0.032993436 = score(doc=4635,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 4635, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4635)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 4635) [ClassicSimilarity], result of:
              0.03349182 = score(doc=4635,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 4635, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4635)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
     We investigate the extent to which open-access (OA) journals and articles in biology, computer science, economics, history, medicine, and psychology are indexed in each of 11 bibliographic databases. We also look for variations in index coverage by journal subject, journal size, publisher type, publisher size, date of first OA issue, region of publication, language of publication, publication fee, and citation impact factor. Two databases, Biological Abstracts and PubMed, provide very good coverage of the OA journal literature, indexing 60 to 63% of all OA articles in their disciplines. Five databases provide moderately good coverage (22-41%), and four provide relatively poor coverage (0-12%). OA articles in biology journals, English-only journals, high-impact journals, and journals that charge publication fees of $1,000 or more are especially likely to be indexed. Conversely, articles from OA publishers in Africa, Asia, or Central/South America are especially unlikely to be indexed. Four of the 11 databases index commercially published articles at a substantially higher rate than articles published by universities, scholarly societies, nonprofit publishers, or governments. Finally, three databases (EBSCO Academic Search Complete, ProQuest Research Library, and Wilson OmniFile) provide less comprehensive coverage of OA articles than of articles in comparable subscription journals.
  3. Nejdl, W.; Risse, T.: Herausforderungen für die nationale, regionale und thematische Webarchivierung und deren Nutzung (2015) 0.02
    0.017452994 = product of:
      0.06981198 = sum of:
        0.06981198 = weight(_text_:web in 2531) [ClassicSimilarity], result of:
          0.06981198 = score(doc=2531,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 2531, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2531)
      0.25 = coord(1/4)
    
    Abstract
     The World Wide Web is established as a worldwide medium of information and communication. New technologies regularly extend the ways it can be used and allow even inexperienced users to publish content or take part in discussions. The web is therefore also regarded as a good documentation of today's society. Because of its dynamic nature, the web's contents are ephemeral, and new technologies and usage patterns regularly pose new challenges for collecting web content for web archiving. Whereas static pages dominated in the early days of web archiving, today one frequently deals with dynamically generated content that integrates information from different sources. Alongside classic domain-oriented web harvesting, a growing interest from various research disciplines in thematic web collections and in their use and exploration can be observed. This article presents some challenges, and approaches to solving them, in collecting thematic and dynamic content from the web and from social media. It also discusses current problems of scholarly use and shows how web archives and other temporal collections can be searched more effectively.
  4. Niggemann, E.: Im weiten endlosen Meer des World Wide Web : vom Sammelauftrag der Gedächtnisorganisationen (2015) 0.02
    0.015114739 = product of:
      0.060458954 = sum of:
        0.060458954 = weight(_text_:web in 2529) [ClassicSimilarity], result of:
          0.060458954 = score(doc=2529,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.37471575 = fieldWeight in 2529, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2529)
      0.25 = coord(1/4)
    
    Abstract
     Since 2006, the legal mandate of the Deutsche Nationalbibliothek has also included collecting media works that are made available to the public in intangible form. This mandate leaves room for interpretation, and indeed not only the handling of these works but even the definition of collection criteria has been the subject of projects and deliberations. For collecting works that are part of the World Wide Web, boundaries must be drawn: the web is too vast and appears endless. Criteria and definitions are also required for the necessary cooperation with, and demarcation from, other memory institutions. This contribution on web harvesting is intended as an invitation to exchange ideas on coordinating collections, nationally and internationally, within the library community and across the cultural world as a whole, from the perspective of the Deutsche Nationalbibliothek.
  5. Ernst, W.: Memorisierung des »Web« : von der emphatischen Archivierung zur Zwischenarchivierung der Gegenwart (2015) 0.01
    0.012341131 = product of:
      0.049364526 = sum of:
        0.049364526 = weight(_text_:web in 2530) [ClassicSimilarity], result of:
          0.049364526 = score(doc=2530,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 2530, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2530)
      0.25 = coord(1/4)
    
    Abstract
     Ever since texts have rested not only on printed letters but, in electronic form, on volatile alphanumeric code, the risks and opportunities of libraries' collecting mandate have been changing as well. The enormous growth in nearly instantaneous access to information on the internet is accompanied by a focus, in part deliberately accepted, on an extended present at the expense of sustainable storage. Where the comprehensive capture of current publications necessarily gives way to sample-based archiving at intervals, a new relationship between time and cultural memory emerges. In the temporal economy of dynamic interim archiving, it falls to libraries both to open themselves to this trend and to resist it. On the one hand, institutionally secured places are needed to preserve not only the user-facing surfaces of the web but also their underlying conditions (from source code up to the emulation of computer hardware) in a way that remains traceable for future cultural critique; on the other hand, new forms of algorithmic indexing of such big data must be experimented with.
  6. Wolchover, N.: Wie ein Aufsehen erregender Beweis kaum Beachtung fand (2017) 0.01
    0.011841145 = product of:
      0.04736458 = sum of:
        0.04736458 = product of:
          0.09472916 = sum of:
            0.09472916 = weight(_text_:22 in 3582) [ClassicSimilarity], result of:
              0.09472916 = score(doc=3582,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.54716086 = fieldWeight in 3582, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3582)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 4.2017 10:42:05
    22. 4.2017 10:48:38
  7. Kousha, K.; Thelwall, M.: ¬An automatic method for assessing the teaching impact of books from online academic syllabi (2016) 0.01
    0.011664942 = product of:
      0.046659768 = sum of:
        0.046659768 = weight(_text_:search in 3226) [ClassicSimilarity], result of:
          0.046659768 = score(doc=3226,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 3226, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3226)
      0.25 = coord(1/4)
    
    Abstract
     Scholars writing books that are widely used to support teaching in higher education may be undervalued because of a lack of evidence of teaching value. Although sales data may give credible evidence for textbooks, these data may poorly reflect educational uses of other types of books. As an alternative, this article proposes a method to search automatically for mentions of books in online academic course syllabi, based on Bing searches for syllabi mentioning a given book and filtering out false matches through an extensive set of rules. The method had an accuracy of over 90% based on manual checks of a sample of 2,600 results from the initial Bing searches. Over one third of about 14,000 monographs checked had one or more academic syllabus mentions, with more in the arts and humanities (56%) and social sciences (52%). Low but significant correlations between syllabus mentions and citations across most fields, except the social sciences, suggest that books tend to have different levels of impact for teaching and research. In conclusion, the automatic syllabus search method offers a way to estimate the educational utility of books that sales data and citation counts cannot.
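The pipeline described (broad web queries for a book, then rule-based filtering of false matches) can be sketched minimally. The cue phrases and exclusion patterns below are illustrative assumptions, not the study's actual rule set, and the Bing query step is omitted:

```python
import re

# Hypothetical cues that a page is a course syllabus, and patterns that mark
# common false matches (e.g. bookseller pages). The real study used a far
# more extensive rule set.
SYLLABUS_CUES = ("syllabus", "course outline", "reading list", "required text")
FALSE_MATCH_PATTERNS = [
    re.compile(r"\bfor sale\b", re.I),
    re.compile(r"\badd to cart\b", re.I),
    re.compile(r"\bcustomer reviews\b", re.I),
]

def looks_like_syllabus_mention(page_text: str) -> bool:
    """Keep a search hit only if it shows syllabus cues and no seller cues."""
    text = page_text.lower()
    if not any(cue in text for cue in SYLLABUS_CUES):
        return False
    return not any(p.search(page_text) for p in FALSE_MATCH_PATTERNS)
```

Counting, per book, how many retrieved pages pass such a filter would then yield the syllabus-mention figures the abstract reports.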
  8. Paal, S.; Eickeler, S.: Automatisierung vom Scan bis zum elektronischen Lesesaal (2011) 0.01
    0.010180915 = product of:
      0.04072366 = sum of:
        0.04072366 = weight(_text_:web in 4897) [ClassicSimilarity], result of:
          0.04072366 = score(doc=4897,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 4897, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4897)
      0.25 = coord(1/4)
    
    Abstract
     Printed media such as books, journals, and newspapers are important sources of information, both for cultural history and for the market, on the way to the knowledge society. To make them available electronically, and to provide and market them, existing documents are digitized, indexed by content with special analysis techniques, and made accessible through web portals. Accessing centrally managed document collections over the internet quickly and securely is, however, a particular challenge. Retrieving high-quality digitized copies means transferring large volumes of data, which increases bandwidth demand and lengthens access times. In addition, copyrighted documents are subject to specific legal restrictions. For the commercial exploitation of digitized print media, suitable reading applications are therefore needed alongside content indexing. The Fraunhofer-Institut Intelligente Analyse- und Informationssysteme (IAIS) develops techniques to meet these challenges.
  9. Steinke, T.: Webarchivierung als internationale Aufgabe (2015) 0.01
    0.010180915 = product of:
      0.04072366 = sum of:
        0.04072366 = weight(_text_:web in 2526) [ClassicSimilarity], result of:
          0.04072366 = score(doc=2526,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 2526, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2526)
      0.25 = coord(1/4)
    
    Abstract
     The web itself is international, and comprehensive web archiving can therefore succeed only through international cooperation. Broad web archiving is carried out above all by the US organization Internet Archive and, at the national level, by national libraries. This article presents some of these web archives. Overarching cooperation, on both the technical and the organizational level, takes place in the International Internet Preservation Consortium (IIPC). In working groups and at conferences, the web archives in the IIPC work on software tools and on organizing cross-institutional collections. There is also international cooperation on standardization, establishing common archive formats and shared indicators for web-archive statistics.
  10. Loos, A.: ¬Die Million ist geknackt (2015) 0.01
    0.010047545 = product of:
      0.04019018 = sum of:
        0.04019018 = product of:
          0.08038036 = sum of:
            0.08038036 = weight(_text_:22 in 4208) [ClassicSimilarity], result of:
              0.08038036 = score(doc=4208,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.46428138 = fieldWeight in 4208, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4208)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    7. 4.2015 17:22:03
  11. Rodrigues, R.S.; Abadal, E.: Scientific journals in Brazil and Spain : alternative publishing models (2014) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 1504) [ClassicSimilarity], result of:
          0.03490599 = score(doc=1504,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 1504, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=1504)
      0.25 = coord(1/4)
    
    Abstract
     This paper describes high-quality journals in Brazil and Spain, with an emphasis on the distribution models used. It presents the general characteristics (age, type of publisher, and theme) and analyzes the distribution model by studying the type of format (print or digital), the type of access (open access or subscription), and the technology platform used. The 549 journals analyzed (249 in Brazil and 300 in Spain) are included in the 2011 Web of Science (WoS) and Scopus databases. Data on each journal were collected directly from their websites between March and October 2012. Brazil has a fully open access distribution model (97%) in which few journals require payment by authors, thanks to cultural, financial, operational, and technological support provided by public agencies. In Spain, open access journals account for 55% of the total and have also received support from public agencies, although to a lesser extent. These results show that there are systems of support for open access in scientific journals other than the "author pays" model advocated by the Finch report for the United Kingdom.
  12. Hu, B.; Dong, X.; Zhang, C.; Bowman, T.D.; Ding, Y.; Milojevic, S.; Ni, C.; Yan, E.; Larivière, V.: ¬A lead-lag analysis of the topic evolution patterns for preprints and publications (2015) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 2337) [ClassicSimilarity], result of:
          0.03490599 = score(doc=2337,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 2337, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2337)
      0.25 = coord(1/4)
    
    Abstract
     This study applied LDA (latent Dirichlet allocation) and regression analysis to conduct a lead-lag analysis identifying different topic evolution patterns between preprints and papers from arXiv and the Web of Science (WoS) in astrophysics over the last 20 years (1992-2011). Fifty topics in arXiv and WoS were generated using an LDA algorithm, and regression models were then used to explain 4 types of topic growth patterns. Based on the slopes of the fitted equation curves, the paper redefines topic trends and popularity. Results show that arXiv and WoS share similar topics in a given domain but differ in evolution trends. Topics in WoS lose their popularity much earlier, and their durations of popularity are shorter than those in arXiv. This work demonstrates that open access preprints have a stronger growth tendency compared to traditional printed publications.
  13. Lorenz, D.: Occupy Publishing! : Wie veröffentlichen wir in Zukunft? (2012) 0.01
    0.008726497 = product of:
      0.03490599 = sum of:
        0.03490599 = weight(_text_:web in 5596) [ClassicSimilarity], result of:
          0.03490599 = score(doc=5596,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 5596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5596)
      0.25 = coord(1/4)
    
    Abstract
    "Über 1000 Mathematikerinnen und Mathematiker aus aller Welt erklären öffentlich ihren Boykott des Elsevier-Verlages auf der Webseite http://thecostofknowledge.com, und unter dem gleichen Namen veröffentlichen 34 namhafte Mathematiker einen offenen Brief, in dem sie in klarer Sprache den Verlag kritisieren (siehe auch die deutsche Übersetzung des offenen Briefes ab Seite 16 dieses Heftes): "What all the signatories do agree on is that Elsevier is an exemplar of everything that is wrong with the current system of commercial publication of mathematics journals, and we will no longer acquiesce to Elsevier's harvesting of the value of our and our colleagues' work." Wie konnte es dazu kommen? Die Geschichte beginnt wahrscheinlich schon dent Ende der 90er Jahre von Rob Kirby, doch mit Hilfe des Web 2.0 hat vor langer Zeit, zuminmit einem offenen Brief sie in den vergangenen Monaten erstaunlich an Fahrt gewonnen. Der Beitrag bietet eine kurze Chronologie der Ereignisse."
  14. Abad-García, M.-F.; González-Teruel, A.; González-Llinares, J.: Effectiveness of OpenAIRE, BASE, Recolecta, and Google Scholar at finding spanish articles in repositories (2018) 0.01
    0.008248359 = product of:
      0.032993436 = sum of:
        0.032993436 = weight(_text_:search in 4185) [ClassicSimilarity], result of:
          0.032993436 = score(doc=4185,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 4185, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4185)
      0.25 = coord(1/4)
    
    Abstract
     This paper explores the usefulness of OpenAIRE, BASE, Recolecta, and Google Scholar (GS) for evaluating open access (OA) policies that demand deposit in a repository. A case study was designed focusing on 762 articles funded by an FIS-2012 project of the Instituto de Salud Carlos III, the main management body for health research in the Spanish national health service; their funding therefore makes them subject to the Spanish Government's OA mandate. A search was carried out for full-text OA copies of the 762 articles using the four tools under evaluation, identifying the repository housing each item. For the 762 articles concerned, 510 OA copies of 353 unique articles (46.3%) were found in 68 repositories. OA copies were found for 81.9% of the articles in PubMed Central and for 49.5% of the articles in an institutional repository (IR). BASE and GS identified 93.5% of the articles and OpenAIRE 86.7%. Recolecta identified just 62.2% of the articles deposited in a Spanish IR. BASE achieved the greatest success by locating copies deposited in IRs, while GS found those deposited in disciplinary repositories. None of the tools identified copies of all the articles, so they need to be used in a complementary way when evaluating OA policies.
  15. Shen, W.; Stempfhuber, M.: Embedding discussion in online publications (2013) 0.01
    0.008227421 = product of:
      0.032909684 = sum of:
        0.032909684 = weight(_text_:web in 940) [ClassicSimilarity], result of:
          0.032909684 = score(doc=940,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2039694 = fieldWeight in 940, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=940)
      0.25 = coord(1/4)
    
    Abstract
     Grey literature and open access publications have the potential to become the basis of new types of scientific publications, in which scientific discourse and collaboration play a central role in the dissemination of knowledge, thanks to their machine-readable format and electronic availability. With advances in cyberscience and e-(social) science, an increasing number of scientific publications need to be shared with different group members or even within communities in virtual reading environments. However, the most commonly used format for publishing articles on the web is still Adobe PDF, which limits the extent to which readers of an article can interact with online content within their browser. This not only separates the formal communication (the article itself) from the informal communication about it (the discussion of the article), but also fails to link the different threads of communication that may appear in parallel at different locations in the scientific community as a whole. An analysis of around 30 websites presenting formal and informal communication in different ways resulted in the identification of several prototypes of media combinations, which were then evaluated against human-factor aspects (distance between related information, arrangement of related information, etc.). Based on this evaluation we concluded that, at the time of the analysis, no model existed for directly integrating formal and informal communication in a single medium that would allow readers to discuss directly within the publication, e.g. to extend the publication with their input at the very paragraph they want to comment on. A new publishing medium is therefore necessary to fill the gap between formal and informal communication, facilitating and encouraging academic readers' active participation in online scientific discourse.
     We have developed an online discussion service that makes interactive annotation features available directly at the point in the publication to which a comment refers. Besides the exchange of ideas and the stimulation of discourse across portals and communities, we aim at the same time to create a new basis for research into scientific discourse, networking, and collaboration. This is supported by linking from the individual article to other publications or information items in digital libraries.
  16. Laakso, M.; Björk, B.-C.: Delayed open access : an overlooked high-impact category of openly available scientific literature (2013) 0.01
    0.0072720814 = product of:
      0.029088326 = sum of:
        0.029088326 = weight(_text_:web in 944) [ClassicSimilarity], result of:
          0.029088326 = score(doc=944,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 944, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=944)
      0.25 = coord(1/4)
    
    Abstract
    Delayed open access (OA) refers to scholarly articles in subscription journals made available openly on the web directly through the publisher at the expiry of a set embargo period. Although a substantial number of journals have practiced delayed OA since they started publishing e-versions, empirical studies concerning OA have often overlooked this body of literature. This study provides comprehensive quantitative measurements by identifying delayed OA journals and collecting data concerning their publication volumes, embargo lengths, and citation rates. Altogether, 492 journals were identified, publishing a combined total of 111,312 articles in 2011; 77.8% of these articles were made OA within 12 months from publication, with 85.4% becoming available within 24 months. A journal impact factor analysis revealed that delayed OA journals have citation rates on average twice as high as those of closed subscription journals and three times as high as immediate OA journals. Overall, the results demonstrate that delayed OA journals constitute an important segment of the openly available scholarly journal literature, both by their sheer article volume and by including a substantial proportion of high-impact journals.
  17. Björk, B.-C.; Laakso, M.; Welling, P.; Paetau, P.: Anatomy of green open access (2014)
    Abstract
    Open access (OA) is free, unrestricted access to electronic versions of scholarly publications. For peer-reviewed journal articles, there are two main routes to OA: publishing in OA journals (gold OA) or archiving of article copies or manuscripts at other web locations (green OA). This study focuses on summarizing and extending current knowledge about green OA. A synthesis of previous studies indicates that green OA coverage of all published journal articles is approximately 12%, with substantial disciplinary variation. Typically, green OA copies become available after considerable time delays, partly caused by publisher-imposed embargo periods, and partly by author tendencies to archive manuscripts only periodically. Although green OA copies should ideally be archived in proper repositories, a large share is stored on home pages and similar locations, with no assurance of long-term preservation. Often such locations contain exact copies of published articles, which may infringe on the publisher's exclusive rights. The technical foundation for green OA uploading is becoming increasingly solid largely due to the rapid increase in the number of institutional repositories. The number of articles within the scope of OA mandates, which strongly influence the self-archival rate of articles, is nevertheless still low.
  18. Vincent-Lamarre, P.; Boivin, J.; Gargouri, Y.; Larivière, V.; Harnad, S.: Estimating open access mandate effectiveness : the MELIBEA score (2016)
    Abstract
    MELIBEA is a directory of institutional open-access policies for research output that uses a composite formula with eight weighted conditions to estimate the "strength" of open access (OA) mandates (registered in ROARMAP). We analyzed total Web of Science-(WoS)-indexed publication output in years 2011-2013 for 67 institutions in which OA was mandated to estimate the mandates' effectiveness: How well did the MELIBEA score and its individual conditions predict what percentage of the WoS-indexed articles is actually deposited in each institution's OA repository, and when? We found a small but significant positive correlation (0.18) between the MELIBEA "strength" score and deposit percentage. For three of the eight MELIBEA conditions (deposit timing, internal use, and opt-outs), one value of each was strongly associated with deposit percentage or latency ([a] immediate deposit required; [b] deposit required for performance evaluation; [c] unconditional opt-out allowed for the OA requirement but no opt-out for deposit requirement). When we updated the initial values and weights of the MELIBEA formula to reflect the empirical association we had found, the score's predictive power for mandate effectiveness doubled (0.36). There are not yet enough OA mandates to test further mandate conditions that might contribute to mandate effectiveness, but the present findings already suggest that it would be productive for existing and future mandates to adopt the three identified conditions so as to maximize their effectiveness, and thereby the growth of OA.
  19. Zahedi, Z.; Costas, R.; Wouters, P.: Mendeley readership as a filtering tool to identify highly cited publications (2017)
    Abstract
    This study presents a large-scale analysis of the distribution and presence of Mendeley readership scores over time and across disciplines. We study whether Mendeley readership scores (RS) can identify highly cited publications more effectively than journal citation scores (JCS). Web of Science (WoS) publications with digital object identifiers (DOIs) published during the period 2004-2013 and across five major scientific fields were analyzed. The main result of this study shows that RS are more effective (in terms of precision/recall values) than JCS to identify highly cited publications across all fields of science and publication years. The findings also show that 86.5% of all the publications are covered by Mendeley and have at least one reader. The share of publications with Mendeley RS increased from 84% in 2004 to 89% in 2009, then decreased from 88% in 2010 to 82% in 2013. However, publications from 2010 onwards exhibit on average a higher density of readership versus citation scores. This indicates that compared to citation scores, RS are more prevalent for recent publications and hence could work as an early indicator of research impact. These findings highlight the potential and value of Mendeley as a tool for scientometric purposes and particularly as a relevant tool to identify highly cited publications.
  20. Zheng, H.; Aung, H.H.; Erdt, M.; Peng, T.-Q.; Raamkumar, A.S.; Theng, Y.-L.: Social media presence of scholarly journals (2019)
    Abstract
    Recently, social media has emerged as a potential new channel for scholarly journals to disseminate and evaluate research outputs. Scholarly journals have started promoting their research articles to a wide range of audiences via social media platforms. This article aims to investigate the social media presence of scholarly journals across disciplines. We extracted journals from Web of Science and searched for the social media presence of these journals on Facebook and Twitter. Relevant metrics and content relating to the journals' social media accounts were also crawled for data analysis. From our results, the social media presence of scholarly journals lies between 7.1% and 14.2% across disciplines, and it has shown a steady increase in the last decade. The popularity of scholarly journals on social media is distinct across disciplines. Further, we investigated whether social media metrics of journals can predict the Journal Impact Factor (JIF). We found that the number of followers and disciplines have significant effects on the JIF. In addition, a word co-occurrence network analysis was also conducted to identify popular topics discussed by scholarly journals on social media platforms. Finally, we highlight challenges and issues faced in this study and discuss future research directions.