Search (49 results, page 1 of 3)

  • × year_i:[2000 TO 2010}
  • × theme_ss:"Informetrie"
  1. Levitt, J.M.; Thelwall, M.: Citation levels and collaboration within library and information science (2009) 0.06
    0.06475571 = product of:
      0.12951142 = sum of:
        0.12951142 = sum of:
          0.08078188 = weight(_text_:policy in 2734) [ClassicSimilarity], result of:
            0.08078188 = score(doc=2734,freq=2.0), product of:
              0.2727254 = queryWeight, product of:
                5.361833 = idf(docFreq=563, maxDocs=44218)
                0.05086421 = queryNorm
              0.29620224 = fieldWeight in 2734, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.361833 = idf(docFreq=563, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2734)
          0.048729543 = weight(_text_:22 in 2734) [ClassicSimilarity], result of:
            0.048729543 = score(doc=2734,freq=4.0), product of:
              0.1781178 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05086421 = queryNorm
              0.27358043 = fieldWeight in 2734, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2734)
      0.5 = coord(1/2)
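The explain tree above is Lucene/Solr ClassicSimilarity (TF-IDF) debug output. Each leaf term weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf(freq) = √freq, and idf = 1 + ln(maxDocs / (docFreq + 1)). A minimal sketch reproducing result 1's numbers, with all constants taken from the explain output itself:

```python
import math

def idf(doc_freq: int, max_docs: int) -> float:
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq: float, doc_freq: int, max_docs: int,
               field_norm: float, query_norm: float) -> float:
    tf = math.sqrt(freq)                # tf(freq) = sqrt(freq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm       # idf * queryNorm
    field_weight = tf * i * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# "policy" in doc 2734: freq=2, docFreq=563, maxDocs=44218
policy = term_score(2.0, 563, 44218, 0.0390625, 0.05086421)

# "22" in doc 2734: freq=4, docFreq=3622, same norms
t22 = term_score(4.0, 3622, 44218, 0.0390625, 0.05086421)

# Top-level score: the two term weights are summed, then scaled by
# coord(1/2) = 0.5 because one of the two top-level clauses matched.
print((policy + t22) * 0.5)  # ≈ 0.0647557, the score shown for result 1
```

The same functions reproduce every leaf in the trees below; only freq, docFreq, and fieldNorm change per document and field.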
    
    Abstract
    Collaboration is a major research policy objective, but does it deliver higher quality research? This study uses citation analysis to examine the Web of Science (WoS) Information Science & Library Science subject category (IS&LS) to ascertain whether, in general, more highly cited articles are more highly collaborative than other articles. It consists of two investigations. The first investigation is a longitudinal comparison of the degree and proportion of collaboration in five strata of citation; it found that collaboration in the highest four citation strata (all in the most highly cited 22%) increased in unison over time, whereas collaboration in the lowest citation stratum (un-cited articles) remained low and stable. Given that over 40% of the articles were un-cited, it seems important to take into account the differences found between un-cited articles and relatively highly cited articles when investigating collaboration in IS&LS. The second investigation compares collaboration for 35 influential information scientists; it found that their more highly cited articles on average were not more highly collaborative than their less highly cited articles. In summary, although collaborative research is conducive to high citation in general, collaboration has apparently not tended to be essential to the success of current and former elite information scientists.
    Date
    22. 3.2009 12:43:51
  2. Levitt, J.M.; Thelwall, M.: Is multidisciplinary research more highly cited? : a macrolevel study (2008) 0.03
    0.028560705 = product of:
      0.05712141 = sum of:
        0.05712141 = product of:
          0.11424282 = sum of:
            0.11424282 = weight(_text_:policy in 2375) [ClassicSimilarity], result of:
              0.11424282 = score(doc=2375,freq=4.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.41889322 = fieldWeight in 2375, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2375)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Interdisciplinary collaboration is a major goal in research policy. This study uses citation analysis to examine diverse subjects in the Web of Science and Scopus to ascertain whether, in general, research published in journals classified in more than one subject is more highly cited than research published in journals classified in a single subject. For each subject, the study divides the journals into two disjoint sets called Multi and Mono. Multi consists of all journals in the subject and at least one other subject whereas Mono consists of all journals in the subject and in no other subject. The main findings are: (a) For social science subject categories in both the Web of Science and Scopus, the average citation levels of articles in Mono and Multi are very similar; and (b) for Scopus subject categories within life sciences, health sciences, and physical sciences, the average citation level of Mono articles is roughly twice that of Multi articles. Hence, one cannot assume that in general, multidisciplinary research will be more highly cited, and the converse is probably true for many areas of science. A policy implication is that, at least in the sciences, multidisciplinary researchers should not be evaluated by citations on the same basis as monodisciplinary researchers.
  3. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.027565593 = product of:
      0.055131186 = sum of:
        0.055131186 = product of:
          0.11026237 = sum of:
            0.11026237 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11026237 = score(doc=6091,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  4. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024364771 = product of:
      0.048729543 = sum of:
        0.048729543 = product of:
          0.097459085 = sum of:
            0.097459085 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.097459085 = score(doc=3925,freq=4.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  5. Contreras, E.J.; Moneda, M. De La; Osma, E. Ruiz de; Bailón-Moreno, R.; Ruiz-Baños, R.: A bibliometric model for journal discarding policy at academic libraries (2006) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 4920) [ClassicSimilarity], result of:
              0.096938245 = score(doc=4920,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 4920, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4920)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Walters, G.D.: Measuring the utility of journals in the crime-psychology field : beyond the impact factor (2006) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 211) [ClassicSimilarity], result of:
              0.096938245 = score(doc=211,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 211, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=211)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A measure of formal journal utility designed to offset some of the more noteworthy limitations of the impact factor (IF) - i.e., short follow-up, citations to items in the numerator that are not included in the denominator, self-citations, and the greater citation rate of review articles - was constructed and applied to 15 crime-psychology journals. This measure, referred to as Citations Per Article (CPA), was correlated with a measure of informal journal utility defined as the frequency with which 58 first authors in the field consulted these 15 crime-psychology journals. Results indicated that the CPA, but not the IF, correlated significantly with informal utility. Two journals (Law and Human Behavior and Criminal Justice and Behavior) displayed consistently high impact across measures of formal and informal utility while several other journals (Journal of Interpersonal Violence; Psychology, Public Policy, and Law; Sexual Abuse: A Journal of Research and Treatment; and Behavioral Sciences and the Law) showed signs of moderate impact when formal and informal measures were combined.
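Walters' CPA measure is defined in the paper itself and is not reproduced here; for contrast, the classic two-year (Garfield) impact factor whose limitations the abstract lists can be sketched as follows (the journal and its numbers are hypothetical):

```python
def impact_factor(cites_this_year_to_prev_two: int,
                  citable_items_prev_two: int) -> float:
    # Two-year Garfield impact factor: citations received in year Y to
    # items published in Y-1 and Y-2, divided by the number of citable
    # items published in Y-1 and Y-2.
    return cites_this_year_to_prev_two / citable_items_prev_two

# Hypothetical journal: 300 citations in 2006 to its 2004-2005 papers,
# of which there were 120 citable items.
print(impact_factor(300, 120))  # 2.5
```

The short two-year window and the numerator/denominator mismatch the abstract mentions are both visible in this ratio: citations to non-citable items (e.g. editorials) inflate the numerator without appearing in the denominator.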
  7. Leydesdorff, L.: Betweenness centrality as an indicator of the interdisciplinarity of scientific journals (2007) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 453) [ClassicSimilarity], result of:
              0.096938245 = score(doc=453,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 453, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=453)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In addition to science citation indicators of journals like impact and immediacy, social network analysis provides a set of centrality measures like degree, betweenness, and closeness centrality. These measures are first analyzed for the entire set of 7,379 journals included in the Journal Citation Reports of the Science Citation Index and the Social Sciences Citation Index 2004 (Thomson ISI, Philadelphia, PA), and then also in relation to local citation environments that can be considered as proxies of specialties and disciplines. Betweenness centrality is shown to be an indicator of the interdisciplinarity of journals, but only in local citation environments and after normalization; otherwise, the influence of degree centrality (size) overshadows the betweenness-centrality measure. The indicator is applied to a variety of citation environments, including policy-relevant ones like biotechnology and nanotechnology. The values of the indicator remain sensitive to the delineations of the set because of the indicator's local character. Maps showing interdisciplinarity of journals in terms of betweenness centrality can be drawn using information about journal citation environments, which is available online.
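Leydesdorff's normalization and the JCR data are not reproduced here, but the betweenness-centrality measure itself can be sketched with Brandes' algorithm for unweighted graphs, using a toy (hypothetical) journal citation network:

```python
from collections import deque

def betweenness(adj: dict) -> dict:
    # Brandes' algorithm: betweenness centrality for an unweighted graph
    # given as an adjacency dict {node: [neighbors]}.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                    # BFS from s
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Hypothetical chain of journals: B cites/is cited by both A and C,
# so B lies on every shortest path between A and C.
bc = betweenness({'A': ['B'], 'B': ['A', 'C'], 'C': ['B']})
print(bc['B'])  # 2.0 (both directions counted; halve for undirected graphs)
```

As the abstract notes, raw betweenness in a full citation network is dominated by degree (size); the paper's point is that it indicates interdisciplinarity only after normalization within local citation environments.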
  8. Leydesdorff, L.: Caveats for the use of citation indicators in research and journal evaluations (2008) 0.02
    0.024234561 = product of:
      0.048469122 = sum of:
        0.048469122 = product of:
          0.096938245 = sum of:
            0.096938245 = weight(_text_:policy in 1361) [ClassicSimilarity], result of:
              0.096938245 = score(doc=1361,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.35544267 = fieldWeight in 1361, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1361)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Aging of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogenous within specialties and fields of science is invalid. Furthermore, the delineation of fields and among specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations. The respective matches may heavily affect the representation of the units. Non-Institute of Scientific Information (ISI) journals are increasingly cornered into transdisciplinary Mode-2 functions with the exception of specialist journals publishing in languages other than English. An externally cited impact factor can be calculated for these journals. The citation impact of non-ISI journals will be demonstrated using Science and Public Policy as the example.
  9. Lewison, G.: The work of the Bibliometrics Research Group (City University) and associates (2005) 0.02
    0.020674193 = product of:
      0.041348387 = sum of:
        0.041348387 = product of:
          0.08269677 = sum of:
            0.08269677 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.08269677 = score(doc=4890,freq=2.0), product of:
                0.1781178 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05086421 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 17:02:22
  10. Brown, C.: The evolution of preprints in the scholarly communication of physicists and astronomers (2001) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 5184) [ClassicSimilarity], result of:
              0.08078188 = score(doc=5184,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 5184, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5184)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In one of two bibliometric papers in this issue Brown looks at formal publication and citation of Eprints as shown by the policies and practices of 37 top tier physics journals, and by citation trends in ISI's SciSearch database and Journal Citation Reports. Citation analysis was carried out if Eprint cites were indicated by editor response, instruction to authors sections, reports in the literature, or actual examination of citation lists. Total contribution to 12 archives and their citation counts in the journals were compiled. Of the 13 editors surveyed who responded, 8 published papers that had appeared in the archive. Two of these required removal from the archive at publication; two of the 13 did not publish papers that have appeared as Eprints. A review journal that solicits its contributions allowed citation of Eprints. Seven allowed citations to Eprints, but were less than enthusiastic. Nearly 36,000 citations were made to the 12 archives. Citations to the 37 journals and their impact factors remain constant over the period of 1991 to 1998. Eprint citations appear to peak about 3 years after appearance as do citations to published papers. Contribution to the archives, and their use as measured by citation, is clearly growing. Citation form and publishing policy vary from journal to journal.
  11. Frandsen, T.F.; Rousseau, R.; Rowlands, I.: Diffusion factors (2006) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 5587) [ClassicSimilarity], result of:
              0.08078188 = score(doc=5587,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 5587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5587)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to clarify earlier work on journal diffusion metrics. Classical journal indicators such as the Garfield impact factor do not measure the breadth of influence across the literature of a particular journal title. As a new approach to measuring research influence, the study complements these existing metrics with a series of formally described diffusion factors. Design/methodology/approach - Using a publication-citation matrix as an organising construct, the paper develops formal descriptions of two forms of diffusion metric: "relative diffusion factors" and "journal diffusion factors" in both their synchronous and diachronous forms. It also provides worked examples for selected library and information science and economics journals, plus a sample of health information papers to illustrate their construction and use. Findings - Diffusion factors capture different aspects of the citation reception process than existing bibliometric measures. The paper shows that diffusion factors can be applied at the whole journal level or for sets of articles and that they provide a richer evidence base for citation analyses than traditional measures alone. Research limitations/implications - The focus of this paper is on clarifying the concepts underlying diffusion factors and there is unlimited scope for further work to apply these metrics to much larger and more comprehensive data sets than has been attempted here. Practical implications - These new tools extend the range of tools available for bibliometric, and possibly webometric, analysis. Diffusion factors might find particular application in studies where the research questions focus on the dynamic aspects of innovation and knowledge transfer. Originality/value - This paper will be of interest to those with theoretical interests in informetric distributions as well as those interested in science policy and innovation studies.
  12. Coleman, A.: Self-archiving and the copyright transfer agreements of ISI-ranked library and information science journals : analytic advantages (2007) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 72) [ClassicSimilarity], result of:
              0.08078188 = score(doc=72,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 72, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=72)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    A study of Thomson-Scientific ISI ranked Library and Information Science (LIS) journals (n=52) is reported. The study examined the stances of publishers as expressed in the Copyright Transfer Agreements (CTAs) of the journals toward self-archiving, the practice of depositing digital copies of one's works in an Open Archives Initiative (OAI)-compliant open access repository. Sixty-two percent (32) do not make their CTAs available on the open Web; 38% (20) do. Of the 38% that do make CTAs available, two are open access journals. Of the 62% that do not have a publicly available CTA, 40% are silent about self-archiving. Even among the 20 journal CTAs publicly available there is a high level of ambiguity. Closer examination augmented by publisher policy documents on copyright, self-archiving, and instructions to authors reveals that only five, 10% of the ISI-ranked LIS journals in the study, actually prohibit self-archiving by publisher rule. Copyright is a moving target, but publishers appear to be acknowledging that copyright and open access can co-exist in scholarly journal publishing. The ambivalence of LIS journal publishers provides unique opportunities to members of the community. Authors can self-archive in open access archives. A society-led, global scholarly communication consortium can engage in the strategic building of the LIS information commons. Aggregating OAI-compliant archives and developing disciplinary-specific library services for an LIS commons has the potential to increase the field's research impact and visibility. It may also ameliorate its own scholarly communication and publishing systems and serve as a model for others.
  13. Leydesdorff, L.: Visualization of the citation impact environments of scientific journals : an online mapping exercise (2007) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 82) [ClassicSimilarity], result of:
              0.08078188 = score(doc=82,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 82, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=82)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Aggregated journal-journal citation networks based on the Journal Citation Reports 2004 of the Science Citation Index (5,968 journals) and the Social Science Citation Index (1,712 journals) are made accessible from the perspective of any of these journals. A vector-space model is used for normalization, and the results are brought online at http://www.leydesdorff.net/jcr04 as input files for the visualization program Pajek. The user is thus able to analyze the citation environment in terms of links and graphs. Furthermore, the local impact of a journal is defined as its share of the total citations in the specific journal's citation environments; the vertical size of the nodes is varied proportionally to this citation impact. The horizontal size of each node can be used to provide the same information after correction for within-journal (self-)citations. In the "citing" environment, the equivalents of this measure can be considered as a citation activity index which maps how the relevant journal environment is perceived by the collective of authors of a given journal. As a policy application, the mechanism of interdisciplinary developments among the sciences is elaborated for the case of nanotechnology journals.
  14. Joint, N.: Bemused by bibliometrics : using citation analysis to evaluate research quality (2008) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 1900) [ClassicSimilarity], result of:
              0.08078188 = score(doc=1900,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 1900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1900)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to examine the way in which library and information science (LIS) issues have been handled in the formulation of recent UK Higher Education policy concerned with research quality evaluation. Design/methodology/approach - A chronological review of decision making about digital rights arrangements for the 2008 Research Assessment Exercise (RAE), and of recent announcements about the new shape of metrics-based assessment in the Research Excellence Framework, which supersedes the RAE. Against this chronological framework, the likely nature of LIS practitioner reactions to the flow of decision making is suggested. Findings - It was found that a weak grasp of LIS issues by decision makers undermines the process whereby effective research evaluation models are created. LIS professional opinion should be sampled before key decisions are made. Research limitations/implications - This paper makes no sophisticated comments on the complex research issues underlying advanced bibliometric research evaluation models. It does point out that sophisticated and expensive bibliometric consultancies arrive at many conclusions about metrics-based research assessment that are common knowledge amongst LIS practitioners. Practical implications - Practical difficulties arise when one announces a decision to move to a new and specific type of research evaluation indicator before one has worked out anything very specific about that indicator. Originality/value - In this paper, the importance of information management issues to the mainstream issues of government and public administration is underlined. The most valuable conclusion of this paper is that, because LIS issues are now at the heart of democratic decision making, LIS practitioners and professionals should be given some sort of role in advising on such matters.
  15. Järvelin, K.; Persson, O.: The DCI index : discounted cumulated impact-based research evaluation (2008) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 2694) [ClassicSimilarity], result of:
              0.08078188 = score(doc=2694,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 2694, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2694)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Research evaluation is increasingly popular and important among research funding bodies and science policy makers. Various indicators have been proposed to evaluate the standing of individual scientists, institutions, journals, or countries. A simple and popular one among the indicators is the h-index, the Hirsch index (Hirsch 2005), which is an indicator for lifetime achievement of a scholar. Several other indicators have been proposed to complement or balance the h-index. However, these indicators have no conception of aging. The AR-index (Jin et al. 2007) incorporates aging but divides the received citation counts by the raw age of the publication. Consequently, the decay of a publication is very steep and insensitive to disciplinary differences. In addition, we believe that a publication becomes outdated only when it is no longer cited, not because of its age. Finally, all indicators treat citations as equally material when one might reasonably think that a citation from a heavily cited publication should weigh more than a citation from a non-cited or little-cited publication. We propose a new indicator, the Discounted Cumulated Impact (DCI) index, which devalues old citations in a smooth way. It rewards an author for receiving new citations even if the publication is old. Further, it allows weighting of the citations by the citation weight of the citing publication. DCI can be used to calculate research performance on the basis of the h-core of a scholar or any other publication data.
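The DCI index itself is specified in the paper and is not reproduced here; the h-index it is designed to complement can be sketched directly from Hirsch's definition (the citation counts in the example are hypothetical):

```python
def h_index(citations: list) -> int:
    # Hirsch's h-index: the largest h such that the scholar has at least
    # h publications each cited at least h times.
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank   # the rank-th most cited paper still has >= rank cites
        else:
            break
    return h

# Hypothetical scholar with five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the example illustrates the abstract's criticism: the paper cited 3 times could age into obsolescence or keep attracting new citations, and the h-index would not distinguish the two cases.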
  16. Rafols, I.; Leydesdorff, L.: Content-based and algorithmic classifications of journals : perspectives on the dynamics of scientific communication and indexer effects (2009) 0.02
    0.02019547 = product of:
      0.04039094 = sum of:
        0.04039094 = product of:
          0.08078188 = sum of:
            0.08078188 = weight(_text_:policy in 3095) [ClassicSimilarity], result of:
              0.08078188 = score(doc=3095,freq=2.0), product of:
                0.2727254 = queryWeight, product of:
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.05086421 = queryNorm
                0.29620224 = fieldWeight in 3095, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.361833 = idf(docFreq=563, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3095)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The aggregated journal-journal citation matrix - based on the Journal Citation Reports (JCR) of the Science Citation Index - can be decomposed by indexers or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two content-based classifications of journals: the ISI Subject Categories and the field/subfield classification of Glänzel and Schubert (2003). The content-based schemes allow for the attribution of more than a single category to a journal, whereas the algorithms maximize the ratio of within-category citations over between-category citations in the aggregated category-category citation matrix. By adding categories, indexers generate between-category citations, which may enrich the database, for example, in the case of inter-disciplinary developments. Algorithmic decompositions, on the other hand, are more heavily skewed towards a relatively small number of categories, while this is deliberately counter-acted upon in the case of content-based classifications. Because of the indexer effects, science policy studies and the sociology of science should be careful when using content-based classifications, which are made for bibliographic disclosure, and not for the purpose of analyzing latent structures in scientific communications. Despite the large differences among them, the four classification schemes enable us to generate surprisingly similar maps of science at the global level. Erroneous classifications are cancelled as noise at the aggregate level, but may disturb the evaluation locally.
  17. Stock, W.G.; Weber, S.: Facets of informetrics : Preface (2006) 0.02
    
    Abstract
    According to Jean M. Tague-Sutcliffe, "informetrics" is "the study of the quantitative aspects of information in any form, not just records or bibliographies, and in any social group, not just scientists" (Tague-Sutcliffe, 1992, 1). Leo Egghe also defines "informetrics" in a very broad sense: "(W)e will use the term 'informetrics' as the broad term comprising all-metrics studies related to information science, including bibliometrics (bibliographies, libraries, ...), scientometrics (science policy, citation analysis, research evaluation, ...), webometrics (metrics of the web, the Internet or other social networks such as citation or collaboration networks), ..." (Egghe, 2005b, 1311). According to Concepcion S. Wilson, "informetrics" is "the quantitative study of collections of moderate-sized units of potentially informative text, directed to the scientific understanding of information processes at the social level" (Wilson, 1999, 211). We should add to Wilson's units of text digital collections of images, videos, spoken documents and music. Dietmar Wolfram divides "informetrics" into two aspects: "system-based characteristics that arise from the documentary content of IR systems and how they are indexed, and usage-based characteristics that arise from how users interact with system content and the system interfaces that provide access to the content" (Wolfram, 2003, 6). We would like to follow Tague-Sutcliffe, Egghe, Wilson and Wolfram (and others, for example Björneborn & Ingwersen, 2004) and call this broad empirical research in information science "informetrics". Informetrics therefore includes all quantitative studies in information science. If a scientist performs scientific investigations empirically - for example on information users' behavior, on the scientific impact of academic journals, on the development of a company's patent application activity, on links between Web pages, on the temporal distribution of blog postings discussing a given topic, or on the availability, recall and precision of retrieval systems, or the usability of Web sites - he or she contributes to informetrics. We see three subject areas in information science in which such quantitative research takes place: information users and information usage; the evaluation of information systems; and information itself. Following Wolfram's article, we divide his system-based characteristics into the "information itself" category and the "information system" category. Figure 1 is a simplified graph of the subjects and research areas of informetrics as an empirical information science.
  18. Bonitz, M.: Ranking of nations and heightened competition in Matthew core journals : two faces of the Matthew effect for countries (2002) 0.02
    
    Abstract
    The Matthew effect for countries (MEC) consists of the systematic deviation in the number of actual (observed) citations from the number of expected citations: A few countries, expecting a high impact (i.e., a high number of cites per paper) receive a surplus of citations, while the majority of countries, expecting a lower impact, lose citations. The MEC is characterized by numerous facets, but two are the most impressive. The first is the possibility of ranking the science nations by their overall efficiency of scientific performance, thus making the MEC attractive for science policy. The second is the concentration of the MEC in a small number of scientific journals which happen to be the most competitive markets for scientific papers and, therefore, are of interest to librarians as well as scientists. First, by using an appropriate measure for the above-mentioned deviation of the observed from the expected citation rate one can bring the countries under investigation into a rank order, which is almost stable over time and independent of the main scientific fields and the size (i.e., publication output) of the participating countries. Metaphorically speaking, this country rank distribution shows the extent to which a country is using its scientific talents. This is the first facet of the MEC. The second facet appears when one studies the mechanism (i.e., microstructure) of the MEC. Every journal contributes to the MEC. The "atoms" of the MEC are redistributed citations, whose number turns out to be a new and sensitive indicator for any scientific journal. Bringing the journals into a rank order according to this indicator, one finds that only 144 journals out of 2,712 contain half of all redistributed citations, and thus account for half of the MEC. We give a list of these "Matthew core journals" (MCJ) together with a new typology relating the new indicator to the well-known ones, such as publication or citation numbers. 
It is our hypothesis that the MCJ are forums of the fiercest competition in science: the "Olympic games in science" proceed in this highest class of scientific journals.
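    The observed-versus-expected comparison at the heart of the MEC can be sketched as follows. This is only an illustration of the principle - expected citations taken as a country's paper count times the world-average citation rate - not the exact indicator Bonitz uses, and the country figures are invented:

```python
def mec_ranking(countries):
    """Rank countries by the relative deviation of observed from expected
    citations. `countries` maps name -> (papers, citations); expected
    citations = papers * world-average citations per paper."""
    total_papers = sum(p for p, _ in countries.values())
    total_cites = sum(c for _, c in countries.values())
    world_rate = total_cites / total_papers
    rows = []
    for name, (papers, cites) in countries.items():
        expected = papers * world_rate
        rows.append((name, (cites - expected) / expected))  # relative surplus or deficit
    return sorted(rows, key=lambda r: r[1], reverse=True)

# Hypothetical data: country A gains a citation surplus, C loses citations.
data = {"A": (1000, 6000), "B": (2000, 8000), "C": (3000, 10000)}
for name, deviation in mec_ranking(data):
    print(name, round(deviation, 3))
```

    Because the deviations are relative to expectation, the resulting rank order is insensitive to country size, which is one reason the abstract reports the ranking as stable over time and across fields.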
  19. Hamilton, E.C.: ¬The impact of survey data : measuring success (2007) 0.02
    
    Abstract
    Large national social surveys are expensive to conduct and to process into usable data files. The purpose of this article is to assess the impact of these national data sets on research using bibliometric measures. Peer-reviewed articles from research using numeric data files and documentation from the Canadian National Population Health Survey (NPHS) were searched in ISI's Web of Science and in Scopus for articles citing the original research. This article shows that articles using NPHS data files and products have been used by a diverse and global network of scholars, practitioners, methodologists, and policy makers. Shifts in electronic publishing and the emergence of new tools for citation analysis are changing the discovery process for published and unpublished work based on inputs to the research process. Evidence of use of large surveys throughout the knowledge transfer process can be critical in assessing grant and operating funding levels for research units, and in influencing design, methodology, and access channels in planning major surveys. The project has gathered citations from the peer-reviewed article stage of knowledge transfer, providing valuable evidence on the use of the data files and methodologies of the survey and of limitations of the survey. Further work can be done to expand the scope of material cited and analyze the data to understand how the longitudinal aspect of the survey contributes to the value of the research output. Building a case for continued funding of national, longitudinal surveys is a challenge. As far as I am aware, however, little use has been made of citation tracking to assess the long-term value of such surveys. Conducting citation analysis on research inputs (data file use and survey products) provides a tangible assessment of the value accrued from large-scale (and expensive) national surveys.
  20. Reedijk, J.; Moed, H.F.: Is the impact of journal impact factors decreasing? (2008) 0.02
    
    Abstract
    The purpose of this paper is to examine the effects of the use of the citation-based journal impact factor for evaluative purposes upon the behaviour of authors and editors. It seeks to give a critical examination of a number of claims regarding the manipulability of this indicator, on the basis of an empirical analysis of the publication and referencing practices of authors and journal editors.
    Design/methodology/approach - The paper describes mechanisms that may affect the numerical values of journal impact factors. It also analyses general, "macro" patterns in large samples of journals in order to obtain indications of the extent to which such mechanisms are actually applied on a large scale. Finally, it presents case studies of particular science journals in order to illustrate what their effects may be in individual cases.
    Findings - The paper shows that the commonly used journal impact factor can, to some extent, be manipulated relatively easily. It discusses several types of strategic editorial behaviour and presents cases in which journal impact factors were - intentionally or otherwise - affected by particular editorial strategies. These findings lead to the conclusion that one must be most careful in interpreting and using journal impact factors, and that authors, editors and policy makers must be aware of their potential manipulability. They also show that some mechanisms as yet occur rather infrequently, while for others it is most difficult, if not impossible, to assess empirically how often they are actually applied. If their frequency of occurrence increases, one should conclude that the impact of impact factors is decreasing.
    Originality/value - The paper systematically describes a number of claims about the manipulability of journal impact factors that are often based on "informal" or even anecdotal evidence, and illustrates how these claims can be further examined in thorough empirical research on large data samples.
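    The two-year journal impact factor under discussion is a simple ratio, which makes the kind of editorial manipulation the abstract analyses easy to illustrate. A minimal sketch with invented figures:

```python
def impact_factor(cites_to_prev_two_years, items_prev_two_years):
    """Classic two-year journal impact factor for year Y:
    citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / items_prev_two_years

# A journal with 150 citable items in 2006-2007 that received 450
# citations to those items during 2008:
print(impact_factor(450, 150))       # 3.0

# 60 extra journal self-citations steered into the two-year window
# inflate the numerator without any change in the journal's content:
print(impact_factor(450 + 60, 150))  # 3.4
```

    Strategies that shrink the denominator (e.g. reclassifying articles as non-citable front matter) have the same arithmetic effect, which is why both numerator- and denominator-side editorial behaviour matter for the indicator's interpretation.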

Languages

  • e 44
  • d 5

Types

  • a 48
  • el 1
  • m 1

Classifications