Search (8 results, page 1 of 1)

  • author_ss:"Cabanac, G."
  1. Cabanac, G.; Preuss, T.: Capitalizing on order effects in the bids of peer-reviewed conferences to secure reviews by expert referees (2013) 0.00
    0.0018909799 = product of:
      0.0037819599 = sum of:
        0.0037819599 = product of:
          0.0075639198 = sum of:
            0.0075639198 = weight(_text_:a in 619) [ClassicSimilarity], result of:
              0.0075639198 = score(doc=619,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.14243183 = fieldWeight in 619, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=619)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
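    The explanation tree above is Lucene's ClassicSimilarity (TF-IDF) scoring breakdown for the query term "a" in document 619. A minimal sketch reproducing the reported numbers from the values shown (the two coord(1/2) factors simply halve the score twice):

```python
import math

# Values taken from the explanation tree for doc 619, term "a"
idf = 1.153047            # idf(docFreq=37942, maxDocs=44218)
query_norm = 0.046056706  # queryNorm
field_norm = 0.0390625    # fieldNorm(doc=619)
freq = 10.0               # termFreq within the field

tf = math.sqrt(freq)                     # 3.1622777 = tf(freq=10.0)
query_weight = idf * query_norm          # 0.053105544 = queryWeight
field_weight = tf * idf * field_norm     # 0.14243183 = fieldWeight
raw_score = query_weight * field_weight  # 0.0075639198
final_score = raw_score * 0.5 * 0.5      # 0.0018909799 after two coord(1/2)
```

    All values are copied from the explanation itself; none are recomputed from the index.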
    
    Abstract
    Peer review supports scientific conferences in selecting high-quality papers for publication. Referees are expected to evaluate submissions equitably according to objective criteria (e.g., originality of the contribution, soundness of the theory, validity of the experiments). We argue that the submission date of papers is a subjective factor playing a role in the way they are evaluated. Indeed, program committee (PC) chairs and referees process submission lists that are usually sorted by paperIDs. This order conveys chronological information, as papers are numbered sequentially upon reception. We show that order effects lead to unconscious favoring of early-submitted papers to the detriment of later-submitted ones. Our point is supported by a study of 42 peer-reviewed conferences in Computer Science showing a decrease in the number of bids placed on submissions with higher paperIDs. We advise counterbalancing order effects during the bidding phase of peer review by promoting the submissions with fewer bids to potential referees. This intervention aims to spread bids more evenly among submissions so as to attract qualified referees for every paper. This would secure reviews from confident referees, who are keen on voicing sharp opinions and recommendations (acceptance or rejection) about submissions. This work contributes to the integrity of peer review, which is mandatory to maintain public trust in science.
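    The counterbalancing step suggested in the abstract, promoting submissions with fewer bids, can be sketched as a simple reordering of the bidding list. This is only an illustration; the data layout and bid counts are hypothetical, not the authors' implementation:

```python
def promote_underbid(submissions):
    """Order submissions by ascending bid count so that papers with few
    bids are shown first to potential referees, counteracting the
    paperID order effect."""
    return sorted(submissions, key=lambda s: s["bids"])

# Hypothetical bidding-phase state: later-submitted papers (higher
# paperIDs) tend to have fewer bids, so they get promoted to the top.
papers = [
    {"paper_id": 1, "bids": 7},
    {"paper_id": 2, "bids": 5},
    {"paper_id": 3, "bids": 1},
]
listing = promote_underbid(papers)  # paperIDs in order: 3, 2, 1
```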
    Type
    a
  2. Cabanac, G.: Bibliogifts in LibGen? : a study of a text-sharing platform driven by biblioleaks and crowdsourcing (2016) 0.00
    
    Abstract
    Research articles disseminate the knowledge produced by the scientific community. Access to this literature is crucial for researchers and the general public. Apparently, "bibliogifts" are available online for free from text-sharing platforms. However, little is known about such platforms. What is the size of the underlying digital libraries? What topics do they cover? Where do these documents originally come from? This article reports on a study of the Library Genesis platform (LibGen). The 25 million documents (42 terabytes) it hosts and distributes for free are mostly research articles, textbooks, and books in English. The article collection stems from isolated but massive article uploads (71%) in line with a "biblioleaks" scenario, as well as from daily crowdsourcing (29%) by worldwide users of platforms such as Reddit Scholar and Sci-Hub. By matching the DOIs registered at CrossRef against those cached at LibGen, this study reveals that 36% of all DOI-registered articles are available for free at LibGen. This figure is even higher (68%) for three major publishers: Elsevier, Springer, and Wiley. More research is needed to understand to what extent researchers and the general public have recourse to such text-sharing platforms, and why.
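    The 36% figure is a set-intersection ratio between the DOIs cached at LibGen and those registered at CrossRef. A minimal sketch with toy DOI sets (the study itself compared tens of millions of DOIs):

```python
def doi_coverage(libgen_dois, crossref_dois):
    """Fraction of CrossRef-registered DOIs also cached at LibGen."""
    return len(libgen_dois & crossref_dois) / len(crossref_dois)

# Toy data: 9 of 25 registered DOIs appear in the LibGen cache -> 0.36
crossref = {f"10.1000/art{i}" for i in range(25)}
libgen = {f"10.1000/art{i}" for i in range(9)}
coverage = doi_coverage(libgen, crossref)  # 0.36
```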
    Type
    a
  3. Cabanac, G.; Hartley, J.: Issues of work-life balance among JASIST authors and editors (2013) 0.00
    
    Abstract
    Many dedicated scientists reject the concept of maintaining a "work-life balance." They argue that work is actually a huge part of life. In the mind-set of these scientists, weekdays and weekends are equally appropriate for working on their research. Although we all have encountered such people, we may wonder how widespread this attitude is among other scientists in our field. This brief communication probes work-life balance issues among JASIST authors and editors. We collected and examined the publication histories for 1,533 of the 2,402 articles published in JASIST between 2001 and 2012. Although there is no rush to submit, revise, or accept papers, we found that 11% of these events happened during weekends and that this trend has been increasing since 2005. Our findings suggest that working during the weekend may be one of the ways that scientists cope with the highly demanding era of "publish or perish." We hope that our findings will raise awareness of the steady increase in work among scientists before it affects our work-life balance even more.
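    The 11% weekend figure rests on classifying each publication-history event (submission, revision, acceptance) by its day of the week. A minimal sketch with hypothetical event dates:

```python
from datetime import date

def weekend_fraction(event_dates):
    """Share of events falling on a Saturday or Sunday; date.weekday()
    returns 5 for Saturday and 6 for Sunday."""
    weekend = [d for d in event_dates if d.weekday() >= 5]
    return len(weekend) / len(event_dates)

# Hypothetical events: one of these four dates (2012-03-11) is a Sunday
events = [date(2012, 3, 5), date(2012, 3, 6),
          date(2012, 3, 7), date(2012, 3, 11)]
share = weekend_fraction(events)  # 0.25
```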
    Type
    a
  4. Cabanac, G.; Labbé, C.: Prevalence of nonsensical algorithmically generated papers in the scientific literature (2021) 0.00
    
    Abstract
    In 2014 leading publishers withdrew more than 120 nonsensical publications automatically generated with the SCIgen program. Casual observations suggested that similar problematic papers are still published and sold, without follow-up retractions. No systematic screening has been performed, and the prevalence of such nonsensical publications in the scientific literature is unknown. Our contribution is twofold. First, we designed a detector that combs the scientific literature for grammar-based computer-generated papers. Applied to SCIgen, it has an 83.6% precision. Second, we performed a scientometric study of the 243 detected SCIgen papers from 19 publishers. We estimate the prevalence of SCIgen papers to be 75 per million papers in Information and Computing Sciences. Only 19% of the 243 problematic papers were dealt with: formal retraction (12) or silent removal (34). Publishers still serve, and sometimes sell, the remaining 197 papers without any caveat. We found evidence of citation manipulation via edited SCIgen bibliographies. This work reveals metric gaming taken to the point of absurdity: fraudsters publish nonsensical algorithmically generated papers featuring genuine references. It stresses the need to screen papers for nonsense before peer review and to chase citation manipulation in published papers. Overall, this is yet another illustration of the harmful effects of the pressure to publish or perish.
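    The reported 83.6% precision is the share of flagged papers that really are SCIgen-generated. A minimal sketch of that ratio; the counts below are hypothetical, chosen only to reproduce the quoted figure:

```python
def precision(true_positives, false_positives):
    """Fraction of detector hits that are genuine computer-generated papers."""
    return true_positives / (true_positives + false_positives)

# Hypothetical validation counts: 209 true hits among 250 flagged papers
p = precision(true_positives=209, false_positives=41)  # 0.836
```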
    Type
    a
  5. Cabanac, G.; Chevalier, M.; Chrisment, C.; Julien, C.: Social validation of collective annotations : definition and experiment (2009) 0.00
    
    Abstract
    People taking part in argumentative debates through collective annotations face a cognitively demanding task when trying to estimate the group's global opinion. To reduce this effort, we propose in this paper to model such debates prior to evaluating their social validation. Computing the degree of global confirmation (or refutation) enables the identification of consensual (or controversial) debates. Readers as well as information systems may thus benefit from this information. The accuracy of the social validation measure was tested through an online study conducted with 121 participants. We compared their human perception of consensus in argumentative debates with the results of the three proposed social validation algorithms. Their efficiency in synthesizing opinions was demonstrated by an accuracy of up to 84%.
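    The abstract does not detail the three social validation algorithms, but the degree of global confirmation can be illustrated by one plausible aggregation: averaging opinions coded +1 (confirm) and -1 (refute). This is an assumption for illustration, not the authors' actual measure:

```python
def global_confirmation(opinions):
    """Mean of opinions coded +1 (confirm) and -1 (refute): values near +1
    or -1 indicate a consensual debate, values near 0 a controversial one."""
    return sum(opinions) / len(opinions)

debate = [+1, +1, +1, -1]  # three confirmations, one refutation
degree = global_confirmation(debate)  # 0.5 -> leaning consensual
```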
    Type
    a
  6. Cabanac, G.: Shaping the landscape of research in information systems from the perspective of editorial boards : a scientometric study of 77 leading journals (2012) 0.00
    
    Type
    a
  7. Cabanac, G.; Hubert, G.; Hartley, J.: Solo versus collaborative writing : discrepancies in the use of tables and graphs in academic articles (2014) 0.00
    
    Abstract
    The number of authors collaborating to write scientific articles has been increasing steadily, and with this collaboration, other factors have also changed, such as the length of articles and the number of citations. However, little is known about potential discrepancies in the use of tables and graphs between single and collaborating authors. In this article, we ask whether multiauthor articles contain more tables and graphs than single-author articles, and we studied 5,180 recent articles published in six science and social sciences journals. We found that pairs and multiple authors used significantly more tables and graphs than single authors. Such findings indicate that there is a greater emphasis on the role of tables and graphs in collaborative writing, and we discuss some of the possible causes and implications of these findings.
    Type
    a
  8. Hartley, J.; Cabanac, G.; Kozak, M.; Hubert, G.: Research on tables and graphs in academic articles : pitfalls and promises (2015) 0.00
    
    Type
    a