Search (3 results, page 1 of 1)

  • classification_ss:"AK 28100"
  1. Thelwall, M.: Web indicators for research evaluation : a practical guide (2016) 0.01
    0.011991608 = product of:
      0.035974823 = sum of:
        0.035974823 = weight(_text_:based in 3384) [ClassicSimilarity], result of:
          0.035974823 = score(doc=3384,freq=4.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.23539014 = fieldWeight in 3384, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3384)
      0.33333334 = coord(1/3)
    
    Abstract
    In recent years there has been an increasing demand for research evaluation within universities and other research-based organisations. In parallel, there has been an increasing recognition that traditional citation-based indicators are not able to reflect the societal impacts of research and are slow to appear. This has led to the creation of new indicators for different types of research impact as well as timelier indicators, mainly derived from the Web. These indicators have been called altmetrics, webometrics or just web metrics. This book describes and evaluates a range of web indicators for aspects of societal or scholarly impact, discusses the theory and practice of using and evaluating web indicators for research assessment and outlines practical strategies for obtaining many web indicators. In addition to describing impact indicators for traditional scholarly outputs, such as journal articles and monographs, it also covers indicators for videos, datasets, software and other non-standard scholarly outputs. The book describes strategies to analyse web indicators for individual publications as well as to compare the impacts of groups of publications. The practical part of the book includes descriptions of how to use the free software Webometric Analyst to gather and analyse web data. This book is written for information science undergraduate and Master's students who are learning about alternative indicators or scientometrics, as well as Ph.D. students and other researchers and practitioners using indicators to help assess research impact or to study scholarly communication.
  2. Gingras, Y.: Bibliometrics and research evaluation : uses and abuses (2016) 0.01
    0.008149769 = product of:
      0.024449307 = sum of:
        0.024449307 = product of:
          0.048898615 = sum of:
            0.048898615 = weight(_text_:training in 3805) [ClassicSimilarity], result of:
              0.048898615 = score(doc=3805,freq=2.0), product of:
                0.23690371 = queryWeight, product of:
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.050723847 = queryNorm
                0.20640713 = fieldWeight in 3805, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.67046 = idf(docFreq=1125, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3805)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The research evaluation market is booming. "Ranking," "metrics," "h-index," and "impact factors" are reigning buzzwords. Government and research administrators want to evaluate everything -- teachers, professors, training programs, universities -- using quantitative indicators. Among the tools used to measure "research excellence," bibliometrics -- aggregate data on publications and citations -- has become dominant. Bibliometrics is hailed as an "objective" measure of research quality, a quantitative measure more useful than "subjective" and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
  3. Beyond bibliometrics : harnessing multidimensional indicators of scholarly intent (2014) 0.01
    0.006783478 = product of:
      0.020350434 = sum of:
        0.020350434 = weight(_text_:based in 3026) [ClassicSimilarity], result of:
          0.020350434 = score(doc=3026,freq=2.0), product of:
            0.15283063 = queryWeight, product of:
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.050723847 = queryNorm
            0.13315678 = fieldWeight in 3026, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0129938 = idf(docFreq=5906, maxDocs=44218)
              0.03125 = fieldNorm(doc=3026)
      0.33333334 = coord(1/3)
    
    Abstract
    Bibliometrics has moved well beyond the mere tracking of bibliographic citations. The web enables new ways to measure scholarly productivity and impact, making available tools and data that can reveal patterns of intellectual activity and impact that were previously invisible: mentions, acknowledgments, endorsements, downloads, recommendations, blog posts, tweets. This book describes recent theoretical and practical advances in metrics-based research, examining a variety of alternative metrics -- or "altmetrics" -- while also considering the ethical and cultural consequences of relying on metrics to assess the quality of scholarship. Once the domain of information scientists and mathematicians, bibliometrics is now a fast-growing, multidisciplinary field that ranges from webometrics to scientometrics to influmetrics. The contributors to Beyond Bibliometrics discuss the changing environment of scholarly publishing, the effects of open access and Web 2.0 on genres of discourse, novel analytic methods, and the emergence of next-generation metrics in a performance-conscious age.
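The explain trees above follow Lucene's ClassicSimilarity (TF-IDF) formula. As a minimal sketch, the top-ranked score can be reproduced from the factors listed in its tree, assuming Lucene's standard definitions tf = sqrt(freq) and idf = ln(maxDocs / (docFreq + 1)) + 1:

```python
import math

# ClassicSimilarity factors for result 1 (term "based" in doc 3384),
# taken from the explain tree above.
freq = 4.0
doc_freq, max_docs = 5906, 44218
query_norm = 0.050723847
field_norm = 0.0390625
coord = 1 / 3  # coord(1/3): one of three query clauses matched

tf = math.sqrt(freq)                           # 2.0
idf = math.log(max_docs / (doc_freq + 1)) + 1  # ~ 3.0129938
query_weight = idf * query_norm                # ~ 0.15283063
field_weight = tf * idf * field_norm           # ~ 0.23539014
score = query_weight * field_weight * coord    # ~ 0.011991608

print(round(score, 9))
```

The same decomposition applies to the other two results; result 2 simply adds an inner coord(1/2) factor for its nested clause before the outer coord(1/3) is applied.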