Search (3 results, page 1 of 1)

  • author_ss:"Gingras, Y."
  • theme_ss:"Informetrie"
  1. Larivière, V.; Gingras, Y.; Archambault, E.: The decline in the concentration of citations, 1900-2007 (2009)
    
    Date
    22.3.2009 19:22:35
  2. Wallace, M.L.; Gingras, Y.; Duhon, R.: A new approach for detecting scientific specialties from raw cocitation networks (2009)
    
    Abstract
    We use a technique recently developed by V. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre (2008) to detect scientific specialties from author cocitation networks. This algorithm has distinct advantages over most previous methods used to obtain cocitation clusters since it avoids the use of similarity measures, relies entirely on the topology of the weighted network, and can be applied to relatively large networks. Most importantly, it requires no subjective interpretation of the cocitation data or of the communities found. Using two examples, we show that the resulting specialties are the smallest coherent groups of researchers (within a hierarchy of cluster sizes) and can thus be identified unambiguously. Furthermore, we confirm that these communities are indeed representative of what we know about the structure of a given scientific discipline and that as specialties, they can be accurately characterized by a few keywords (from the publication titles). We argue that this robust and efficient algorithm is particularly well-suited to cocitation networks and that the results generated can be of great use to researchers studying various facets of the structure and evolution of science.
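    The Blondel et al. (2008) method the abstract refers to works by greedily moving each node into the neighboring community that most increases the modularity of the weighted network. A minimal pure-Python sketch of that local-moving step is shown below; the toy adjacency structure and function names are illustrative, not the authors' data or code.

    ```python
    # Sketch of the modularity-based local-moving pass underlying the
    # Blondel et al. (2008) "Louvain" method; a hypothetical illustration,
    # not the implementation used in the paper.

    def modularity(adj, partition):
        """Newman modularity Q of a partition of a weighted, undirected graph.
        adj maps node -> {neighbor: weight}; partition maps node -> community."""
        m = sum(w for u in adj for w in adj[u].values()) / 2.0  # total edge weight
        deg = {u: sum(adj[u].values()) for u in adj}            # weighted degrees
        q = 0.0
        for u in adj:
            for v in adj:
                if partition[u] == partition[v]:
                    # A_uv minus the expected weight under the null model
                    q += adj[u].get(v, 0.0) - deg[u] * deg[v] / (2.0 * m)
        return q / (2.0 * m)

    def local_move_pass(adj, partition):
        """Repeatedly move each node to the neighboring community that most
        improves modularity, until no single move helps (one Louvain phase)."""
        improved = True
        while improved:
            improved = False
            for u in adj:
                best, best_q = partition[u], modularity(adj, partition)
                for c in {partition[v] for v in adj[u]}:
                    old = partition[u]
                    partition[u] = c          # tentatively move u into c
                    q = modularity(adj, partition)
                    if q > best_q:
                        best, best_q = c, q
                    partition[u] = old        # revert before trying the next c
                if best != partition[u]:
                    partition[u] = best
                    improved = True
        return partition
    ```

    On a small weighted graph of two triangles joined by a single bridge edge, starting from singleton communities, the pass groups each triangle into its own community, which mirrors how the abstract's specialties emerge as the smallest coherent clusters without any similarity measure.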
  3. Gingras, Y.: Bibliometrics and research evaluation : uses and abuses (2016)
    
    Abstract
    The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.