Search (5 results, page 1 of 1)

  • author_ss:"Moed, H.F."
  • theme_ss:"Informetrie"
  • year_i:[2000 TO 2010}
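  The active filters above use Solr-style field queries: exact matches on author_ss and theme_ss, and a range on year_i whose bracket pair [2000 TO 2010} makes the lower bound inclusive and the upper bound exclusive. As a minimal sketch, the same filtered search could be issued against a Solr select handler roughly as follows; the host, core name ("literature"), and the request layout are illustrative assumptions, not details of this catalogue:

    import requests  # plain HTTP client; assumes a reachable Solr instance

    params = {
        "q": "*:*",                      # match everything, then filter
        "fq": [                          # repeated filter-query parameters
            'author_ss:"Moed, H.F."',
            'theme_ss:"Informetrie"',
            "year_i:[2000 TO 2010}",     # inclusive lower, exclusive upper bound
        ],
        "rows": 10,
        "wt": "json",
    }
    resp = requests.get("http://localhost:8983/solr/literature/select", params=params)
    for doc in resp.json()["response"]["docs"]:
        print(doc)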
  1. Moed, H.F.: Statistical relationships between downloads and citations at the level of individual documents within a single journal (2005) 0.00
    Abstract
    Statistical relationships between downloads from ScienceDirect of documents in Elsevier's electronic journal Tetrahedron Letters and citations to these documents recorded in journals processed by the Institute for Scientific Information/Thomson Scientific for the Science Citation Index (SCI) are examined. A synchronous approach revealed that downloads and citations show different patterns of obsolescence of the used materials. The former can be adequately described by a model consisting of the sum of two negative exponential functions, representing an ephemeral and a residual factor, whereas the decline phase of the latter conforms to a simple exponential function with a decay constant statistically similar to that of the downloads' residual factor. A diachronous approach showed that, as a cohort of documents grows older, its download distribution becomes more and more skewed, and more statistically similar to its citation distribution. A method is proposed to estimate the effect of citations upon downloads using obsolescence patterns. It was found that during the first 3 months after an article is cited, its number of downloads increased by 25% compared with what one would expect this number to be if the article had not been cited. Moreover, more downloads of citing documents led to more downloads of the cited article through the citation. An analysis of 1,190 papers in the journal during a time interval of 2 years after publication date revealed that there is about one citation for every 100 downloads. A Spearman rank correlation coefficient of 0.22 was found between the number of times an article was downloaded and its citation rate recorded in the SCI. When initial downloads (defined as downloads made during the first 3 months after publication) were discarded, the correlation rose to 0.35. However, both outcomes measure the joint effect of downloads upon citation and that of citation upon downloads. When only initial downloads are correlated with later citation counts, the coefficient drops to 0.11. Findings suggest that initial downloads and citations relate to distinct phases in the process of collecting and processing relevant scientific information that eventually leads to the publication of a journal article.
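    The model described above can be made concrete with a short sketch: downloads per month are modelled as the sum of two negative exponentials (an ephemeral and a residual factor), and article-level downloads and citations are compared with a Spearman rank correlation. The data below are synthetic and every parameter value is an illustrative assumption, not a figure from the paper; only the functional form and the choice of coefficient follow the abstract.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import spearmanr

    def download_obsolescence(t, a1, b1, a2, b2):
        # Sum of two negative exponentials: a fast-decaying ephemeral factor
        # and a slow-decaying residual factor.
        return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

    rng = np.random.default_rng(0)
    months = np.arange(24.0)
    # Synthetic monthly download counts for a cohort of articles (illustrative).
    downloads = download_obsolescence(months, 400, 0.9, 60, 0.05) + rng.normal(0, 5, months.size)

    params, _ = curve_fit(download_obsolescence, months, downloads, p0=[300, 1.0, 50, 0.1])
    print("fitted ephemeral/residual parameters:", np.round(params, 3))

    # Article-level counts: roughly one citation per 100 downloads, as in the abstract.
    article_downloads = rng.poisson(100, size=500)
    article_citations = rng.poisson(article_downloads / 100.0)
    rho, p = spearmanr(article_downloads, article_citations)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")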
    Type
    a
  2. Moed, H.F.: The effect of "open access" on citation impact : an analysis of ArXiv's condensed matter section (2007) 0.00
    Abstract
    This article statistically analyzes how the citation impact of articles deposited in the Condensed Matter section of the preprint server ArXiv (hosted by Cornell University), and subsequently published in a scientific journal, compares to that of articles in the same journal that were not deposited in the archive. Its principal aim is to further illustrate and roughly estimate the effect of two factors, early view and quality bias, on differences in citation impact between these two sets of papers, using citation data from Thomson Scientific's Web of Science. It presents estimates for a number of journals in the field of condensed matter physics. To discriminate between an open access effect and an early view effect, longitudinal citation data were analyzed covering a time period as long as 7 years. Quality bias was measured by calculating ArXiv citation impact differentials at the level of individual authors publishing in a journal, taking into account coauthorship. The analysis provided evidence of a strong quality bias and early view effect. After correcting for these effects, a sample of six condensed matter physics journals studied in detail shows no sign of a general open access advantage for papers deposited in ArXiv. The study does provide evidence that ArXiv accelerates citation, because it makes papers available earlier rather than because it makes them freely available.
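    The author-level citation impact differential used to gauge quality bias can be sketched as follows. The records, field layout, and the simple unweighted mean are hypothetical illustrations and ignore the coauthorship weighting mentioned above; the sketch only shows the idea of comparing, per author, the cited-ness of deposited versus non-deposited papers in the same journal.

    from collections import defaultdict

    # Hypothetical (author, deposited_in_arxiv, citations) records for one journal.
    papers = [
        ("author_a", True, 30), ("author_a", False, 12),
        ("author_b", True, 8),  ("author_b", False, 10),
        ("author_c", True, 25), ("author_c", False, 5),
    ]

    groups = defaultdict(lambda: {"arxiv": [], "other": []})
    for author, deposited, citations in papers:
        groups[author]["arxiv" if deposited else "other"].append(citations)

    # Per-author differential: mean citations of deposited minus non-deposited papers.
    # A clearly positive average across authors would point to quality bias, i.e.
    # authors tending to deposit their better work in ArXiv.
    diffs = []
    for author, g in groups.items():
        if g["arxiv"] and g["other"]:
            d = sum(g["arxiv"]) / len(g["arxiv"]) - sum(g["other"]) / len(g["other"])
            diffs.append(d)
            print(f"{author}: differential = {d:+.1f}")
    print("mean author-level differential:", round(sum(diffs) / len(diffs), 1))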
    Type
    a
  3. Glänzel, W.; Moed, H.F.: Journal impact measures in bibliometric research (2002) 0.00
    Type
    a
  4. Moed, H.F.; Luwel, M.; Nederhof, A.J.: Towards research performance in the humanities (2002) 0.00
    Abstract
    This paper describes a general methodology for developing bibliometric performance indicators. Such a description provides a framework or paradigm for application-oriented research in the field of evaluative quantitative science and technology studies, particularly in the humanities and social sciences. It is based on our study of scholarly output in the field of Law at the four major universities in Flanders, the Dutch-speaking part of Belgium. The study illustrates that bibliometrics is much more than conducting citation analyses based on the indexes produced by the Institute for Scientific Information (ISI), since citation data do not play a role in the study. Interaction with scholars in the fields under consideration and openness in the presentation of the quantitative outcomes are the basic features of the methodology. Bibliometrics should be used as an instrument to create a mirror. While not a direct reflection, this study provides a thorough analysis of how scholars in the humanities and social sciences structure their activities and their research output. This structure can be examined empirically from the point of view of its consistency and the degree of consensus among scholars. Relevant issues can be raised that are worth considering in more detail in follow-up studies, and conclusions from our empirical materials may illuminate such issues. We argue that the principal aim of the development and application of bibliometric indicators is to stimulate a debate among scholars in the field under investigation on the nature of scholarly quality, its principal dimensions, and operationalizations. This aim provides a criterion of "productivity" of the development process. We further contend that librarians are not infrequently requested to provide assistance in collecting data related to research performance assessments; the methodology described in the paper aims at offering a general framework for such activities and can be used by librarians as a line of action whenever they become involved.
    Type
    a
  5. Reedijk, J.; Moed, H.F.: Is the impact of journal impact factors decreasing? (2008) 0.00
    Abstract
    Purpose - The purpose of this paper is to examine the effects of the use of the citation-based journal impact factor for evaluative purposes upon the behaviour of authors and editors. It seeks to give a critical examination of a number of claims as regards the manipulability of this indicator on the basis of an empirical analysis of publication and referencing practices of authors and journal editors. Design/methodology/approach - The paper describes mechanisms that may affect the numerical values of journal impact factors. It also analyses general, "macro" patterns in large samples of journals in order to obtain indications of the extent to which such mechanisms are actually applied on a large scale. Finally it presents case studies of particular science journals in order to illustrate what their effects may be in individual cases. Findings - The paper shows that the commonly used journal impact factor can to some extent be relatively easily manipulated. It discusses several types of strategic editorial behaviour, and presents cases in which journal impact factors were - intentionally or otherwise - affected by particular editorial strategies. These findings lead to the conclusion that one must be most careful in interpreting and using journal impact factors, and that authors, editors and policy makers must be aware of their potential manipulability. They also show that some mechanisms occur as yet rather infrequently, while for others it is most difficult if not impossible to assess empirically how often they are actually applied. If their frequency of occurrence increases, one should conclude that the impact of impact factors is decreasing. Originality/value - The paper systematically describes a number of claims about the manipulability of journal impact factors that are often based on "informal" or even anecdotal evidence, and illustrates how these claims can be further examined in thorough empirical research on large data samples.
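    For reference, the indicator at issue is the two-year journal impact factor: citations received in year Y by items a journal published in years Y-1 and Y-2, divided by the number of citable items (articles and reviews) it published in those two years. The sketch below uses invented numbers to show, under that definition, how two frequently claimed editorial strategies move the value; the figures and scenarios are illustrative, not data from the paper.

    def impact_factor(citations_in_year_y, citable_items_prev_two_years):
        # Two-year journal impact factor: citations in year Y to items published
        # in Y-1 and Y-2, divided by citable items published in Y-1 and Y-2.
        return citations_in_year_y / citable_items_prev_two_years

    citations, citable_items = 1200, 600            # invented baseline
    print("baseline JIF:", round(impact_factor(citations, citable_items), 2))

    # Claimed strategy 1: steer extra citations to the journal's own recent papers
    # (e.g. through editorials); the numerator grows, the denominator does not.
    print("after 120 added self-citations:",
          round(impact_factor(citations + 120, citable_items), 2))

    # Claimed strategy 2: publish material as "non-citable" items (editorials,
    # letters); citations to them still count, but the denominator shrinks.
    print("after reclassifying 60 items:",
          round(impact_factor(citations, citable_items - 60), 2))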
    Type
    a