Search (2 results, page 1 of 1)

  • author_ss:"Peters, I."
  • theme_ss:"Informetrie"
  1. Schmitz, J.; Arning, U.; Peters, I.: handbuch.io : Handbuch CoScience / Messung von wissenschaftlichem Impact (2015) 0.01
    0.010510692 = product of:
      0.031532075 = sum of:
        0.031532075 = weight(_text_:im in 2189) [ClassicSimilarity], result of:
          0.031532075 = score(doc=2189,freq=2.0), product of:
            0.1442303 = queryWeight, product of:
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.051022716 = queryNorm
            0.2186231 = fieldWeight in 2189, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.8267863 = idf(docFreq=7115, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2189)
      0.33333334 = coord(1/3)
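    The tree above is Lucene's standard "explain" output for ClassicSimilarity, i.e. classic TF-IDF scoring. As a sanity check, here is a minimal Python sketch that recomputes the score from the factors reported in the tree; the formulas are Lucene's documented ClassicSimilarity ones, and only the constants are taken from the output above:

      import math

      # Factors reported in the explain tree for doc 2189, term "im".
      freq = 2.0               # termFreq of "im" in the field
      doc_freq = 7115          # docFreq from the idf() line
      max_docs = 44218         # maxDocs from the idf() line
      query_norm = 0.051022716
      field_norm = 0.0546875   # length normalization stored at index time
      coord = 1.0 / 3.0        # coord(1/3): 1 of 3 query terms matched

      # ClassicSimilarity building blocks.
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # ~2.8267863
      tf = math.sqrt(freq)                               # ~1.4142135

      query_weight = idf * query_norm        # ~0.1442303
      field_weight = tf * idf * field_norm   # ~0.2186231
      print(query_weight * field_weight * coord)  # ~0.010510692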
    
    Abstract
    The evaluation of research and publication performance plays a major role in various contexts within the science system, in particular because third-party funding is scarce and prestigious positions such as professorships are rare. Besides the substantive, qualitative assessment of scholarly work through peer review, there are also attempts to quantify the publication output of individual researchers, institutes, or working groups. This "measurement" of publications is also known as bibliometrics or scientometrics. Three key figures are central here:
    - Productivity: the number of publications
    - Impact: the number of citations
    - Collaboration: the number of articles published jointly with other authors or institutions
    Citation plays a particularly important role in science.
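    As a minimal illustration of these three indicators, the following Python sketch computes them from a hypothetical publication list; the record structure and the sample numbers are assumptions made for this example, not data from the handbook chapter:

      # Hypothetical records for one researcher (illustrative only).
      publications = [
          {"authors": ["Peters, I."], "citations": 12},
          {"authors": ["Peters, I.", "Schmitz, J."], "citations": 34},
          {"authors": ["Peters, I.", "Arning, U."], "citations": 5},
      ]

      productivity = len(publications)                    # number of publications
      impact = sum(p["citations"] for p in publications)  # number of citations
      collaboration = sum(1 for p in publications
                          if len(p["authors"]) > 1)       # co-authored articles

      print(productivity, impact, collaboration)  # 3 51 2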
  2. Lemke, S.; Mazarakis, A.; Peters, I.: Conjoint analysis of researchers' hidden preferences for bibliometrics, altmetrics, and usage metrics (2021) 0.00
    0.0028845975 = product of:
      0.008653793 = sum of:
        0.008653793 = product of:
          0.025961377 = sum of:
            0.025961377 = weight(_text_:online in 247) [ClassicSimilarity], result of:
              0.025961377 = score(doc=247,freq=2.0), product of:
                0.1548489 = queryWeight, product of:
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.051022716 = queryNorm
                0.16765618 = fieldWeight in 247, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.0349014 = idf(docFreq=5778, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=247)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Abstract
    The number of scholarly articles published annually is growing steadily, as is the number of indicators through which the impact of publications is measured. Little is known about how this increasing variety of available metrics affects researchers' processes of selecting literature to read. We conducted ranking experiments embedded in an online survey with 247 participating researchers, most of them from the social sciences. Participants completed a series of tasks in which they were asked to rank fictitious publications by their expected relevance, based on the publications' scores on six prototypical metrics. Through logistic regression, cluster analysis, and manual coding of survey answers, we obtained detailed data on how prominent metrics for research impact influence our participants' decisions about which scientific articles to read. Survey answers revealed a combination of qualitative and quantitative characteristics that researchers consult when selecting literature, while the regression analysis showed that, among quantitative metrics, citation counts tend to be of highest concern, followed by Journal Impact Factors. Our results suggest a comparatively favorable view of bibliometrics among many researchers, alongside widespread skepticism toward altmetrics. The findings underline the importance of equipping researchers with solid knowledge about specific metrics' limitations, as these metrics appear to play a significant role in researchers' everyday relevance assessments.
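    To make the methodology concrete, here is a rough Python sketch of a regression of the kind the abstract describes: relating differences in fictitious publications' metric scores to a reader's choice between them. All data are simulated and all names are placeholders (assuming scikit-learn is available); this is not the study's actual model or coding scheme:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Each row: one comparison of two fictitious publications,
      # described by their score differences on six prototypical metrics.
      n_pairs, n_metrics = 500, 6
      score_diffs = rng.normal(size=(n_pairs, n_metrics))

      # Simulated choices in which metric 0 ("citation count") dominates,
      # mirroring the reported finding that citations matter most.
      true_weights = np.array([1.5, 0.8, 0.3, 0.2, 0.1, 0.1])
      prob_first = 1.0 / (1.0 + np.exp(-score_diffs @ true_weights))
      chose_first = (rng.random(n_pairs) < prob_first).astype(int)

      # Coefficient magnitudes estimate each metric's influence on
      # which publication a participant would choose to read.
      model = LogisticRegression().fit(score_diffs, chose_first)
      print(model.coef_.round(2))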
