Search (2 results, page 1 of 1)

  • author_ss:"Cabanac, G."
  • theme_ss:"Elektronisches Publizieren"
  1. Cabanac, G.: Bibliogifts in LibGen? : a study of a text-sharing platform driven by biblioleaks and crowdsourcing (2016) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 2850) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=2850,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 2850, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2850)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Research articles disseminate the knowledge produced by the scientific community. Access to this literature is crucial for researchers and the general public. Apparently, "bibliogifts" are available online for free from text-sharing platforms. However, little is known about such platforms. What is the size of the underlying digital libraries? What are the topics covered? Where do these documents originally come from? This article reports on a study of the Library Genesis platform (LibGen). The 25 million documents (42 terabytes) it hosts and distributes for free are mostly research articles, textbooks, and books in English. The article collection stems from isolated, but massive, article uploads (71%) in line with a "biblioleaks" scenario, as well as from daily crowdsourcing (29%) by worldwide users of platforms such as Reddit Scholar and Sci-Hub. By relating the DOIs registered at CrossRef and those cached at LibGen, this study reveals that 36% of all DOI articles are available for free at LibGen. This figure is even higher (68%) for three major publishers: Elsevier, Springer, and Wiley. More research is needed to understand to what extent researchers and the general public have recourse to such text-sharing platforms and why.
    Type
    a
  2. Cabanac, G.; Labbé, C.: Prevalence of nonsensical algorithmically generated papers in the scientific literature (2021) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 410) [ClassicSimilarity], result of:
              0.006765375 = score(doc=410,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 410, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=410)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In 2014, leading publishers withdrew more than 120 nonsensical publications automatically generated with the SCIgen program. Casual observations suggested that similar problematic papers are still published and sold, without follow-up retractions. No systematic screening has been performed, and the prevalence of such nonsensical publications in the scientific literature is unknown. Our contribution is 2-fold. First, we designed a detector that combs the scientific literature for grammar-based computer-generated papers. Applied to SCIgen, it has an 83.6% precision. Second, we performed a scientometric study of the 243 detected SCIgen-papers from 19 publishers. We estimate the prevalence of SCIgen-papers to be 75 per million papers in Information and Computing Sciences. Only 19% of the 243 problematic papers were dealt with: formal retraction (12) or silent removal (34). Publishers still serve and sometimes sell the remaining 197 papers without any caveat. We found evidence of citation manipulation via edited SCIgen bibliographies. This work reveals metric gaming up to the point of absurdity: fraudsters publish nonsensical algorithmically generated papers featuring genuine references. It stresses the need to screen papers for nonsense before peer review and chase citation manipulation in published papers. Overall, this is yet another illustration of the harmful effects of the pressure to publish or perish.
    Type
    a
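
Note on scoring: the numeric breakdowns shown under each hit are Lucene "explain" output for the ClassicSimilarity (TF-IDF) model. As a reading aid only, the minimal Python sketch below reproduces the arithmetic for the first hit from the values printed above; the variable names are illustrative and not part of the search system.

    import math

    # Values copied from the first hit's explain tree (doc 2850); the variable
    # names are illustrative and not part of the search system.
    freq = 10.0                # termFreq of "a" in the matched field
    idf = 1.153047             # idf(docFreq=37942, maxDocs=44218), as reported
    query_norm = 0.046056706   # queryNorm
    field_norm = 0.0390625     # fieldNorm(doc=2850)
    coord = 0.5                # coord(1/2): one of two query clauses matched

    tf = math.sqrt(freq)                      # 3.1622777 = tf(freq=10.0)
    query_weight = idf * query_norm           # 0.053105544 = queryWeight
    field_weight = tf * idf * field_norm      # 0.14243183 = fieldWeight
    term_score = query_weight * field_weight  # 0.0075639198 = weight(_text_:a ...)

    # coord(1/2) is applied at two nesting levels in the explain tree.
    final_score = term_score * coord * coord
    print(final_score)  # ~0.0018909799, shown in the hit list rounded to 0.00

The second hit follows the same formula with freq=8.0, giving 0.0016913437; both values are rounded to 0.00 in the result list.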