Search (35 results, page 1 of 2)

  • theme_ss:"Citation indexing"
  1. Larivière, V.; Gingras, Y.; Archambault, E.: The decline in the concentration of citations, 1900-2007 (2009) 0.09
    0.08855899 = product of:
      0.17711797 = sum of:
        0.10374144 = weight(_text_:fields in 2763) [ClassicSimilarity], result of:
          0.10374144 = score(doc=2763,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.32825118 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=2763)
        0.073376544 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
          0.073376544 = score(doc=2763,freq=4.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.32829654 = fieldWeight in 2763, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2763)
      0.5 = coord(2/4)
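    The relevance values in this listing are Lucene ClassicSimilarity (TF-IDF) explanations: each matching query term contributes queryWeight × fieldWeight, the contributions are summed, and the sum is scaled by the coord factor. A minimal sketch that re-derives the first hit's 0.08855899 from the factors printed above; the helper function is illustrative, not part of Lucene's API.

```python
from math import sqrt

def term_score(freq, idf, query_norm, field_norm):
    """One ClassicSimilarity term contribution: queryWeight * fieldWeight."""
    query_weight = idf * query_norm               # e.g. 4.951651 * 0.06382575
    field_weight = sqrt(freq) * idf * field_norm  # ClassicSimilarity tf(freq) = sqrt(freq)
    return query_weight * field_weight

# Factors copied from the explanation of document 2763 above.
QUERY_NORM = 0.06382575
FIELD_NORM = 0.046875

w_fields = term_score(freq=2.0, idf=4.951651,  query_norm=QUERY_NORM, field_norm=FIELD_NORM)
w_22     = term_score(freq=4.0, idf=3.5018296, query_norm=QUERY_NORM, field_norm=FIELD_NORM)

score = (2 / 4) * (w_fields + w_22)   # coord(2/4): 2 of 4 query terms matched
print(round(score, 8))                # ~0.08855899, as shown in the listing
```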
    
    Abstract
    This article challenges recent research (Evans, 2008) reporting that the concentration of cited scientific literature increases with the online availability of articles and journals. Using Thomson Reuters' Web of Science, the present article analyses changes in the concentration of citations received (2- and 5-year citation windows) by papers published between 1900 and 2005. Three measures of concentration are used: the percentage of papers that received at least one citation (cited papers); the percentage of papers needed to account for 20%, 50%, and 80% of the citations; and the Herfindahl-Hirschman index (HHI). These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.
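    The three concentration measures named in the abstract (share of cited papers, share of papers accounting for a given fraction of the citations, and the Herfindahl-Hirschman index) can be computed from per-paper citation counts along the following lines; this is a sketch based on the standard definitions, not the authors' exact implementation.

```python
def concentration_measures(citations, share=0.5):
    """citations: one citation count per paper (hypothetical input)."""
    n, total = len(citations), sum(citations)
    # Share of papers that received at least one citation.
    cited_share = sum(1 for c in citations if c > 0) / n
    # Herfindahl-Hirschman index over each paper's share of all citations.
    hhi = sum((c / total) ** 2 for c in citations) if total else 0.0
    # Smallest fraction of papers that accounts for `share` of all citations.
    acc, k = 0, 0
    for c in sorted(citations, reverse=True):
        if acc >= share * total:
            break
        acc += c
        k += 1
    return cited_share, hhi, k / n
```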
    Date
    22. 3.2009 19:22:35
  2. Peritz, B.C.: A classification of citation roles for the social sciences and related fields (1983) 0.06
    0.06051584 = product of:
      0.24206336 = sum of:
        0.24206336 = weight(_text_:fields in 3073) [ClassicSimilarity], result of:
          0.24206336 = score(doc=3073,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.76591945 = fieldWeight in 3073, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.109375 = fieldNorm(doc=3073)
      0.25 = coord(1/4)
    
  3. Sombatsompop, N.; Markpin, T.: Making an equality of ISI impact factors for different subject fields (2005) 0.06
    0.057993226 = product of:
      0.2319729 = sum of:
        0.2319729 = weight(_text_:fields in 3467) [ClassicSimilarity], result of:
          0.2319729 = score(doc=3467,freq=10.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.733992 = fieldWeight in 3467, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=3467)
      0.25 = coord(1/4)
    
    Abstract
    The journal impact factors, published by the Institute for Scientific Information (ISI; Philadelphia, PA), are widely known and are used to evaluate overall journal quality and the quality of the papers published therein. However, when making comparisons between subject fields, the work of individual scientists and their research institutions as reflected in their articles' ISI impact factors can become meaningless. This inequality will remain as long as ISI impact factors are employed as an instrument to assess the quality of international research. Here we propose a new mathematical index entitled Impact Factor Point Average (IFPA) for assessment of the quality of individual research work in different subject fields. The index is established based on a normalization of differences in impact factors, rankings, and number of journal titles in different subject fields. The proposed index is simple and enables the ISI impact factors to be used with equality, especially when evaluating the quality of research work in different subject fields.
  4. Leydesdorff, L.: Caveats for the use of citation indicators in research and journal evaluations (2008) 0.05
    0.05187072 = product of:
      0.20748287 = sum of:
        0.20748287 = weight(_text_:fields in 1361) [ClassicSimilarity], result of:
          0.20748287 = score(doc=1361,freq=8.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.65650237 = fieldWeight in 1361, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.046875 = fieldNorm(doc=1361)
      0.25 = coord(1/4)
    
    Abstract
    Aging of publications, percentage of self-citations, and impact vary from journal to journal within fields of science. The assumption that citation and publication practices are homogeneous within specialties and fields of science is invalid. Furthermore, the delineation of fields and specialties is fuzzy. Institutional units of analysis and persons may move between fields or span different specialties. The match between the citation index and institutional profiles varies among institutional units and nations. The respective matches may heavily affect the representation of the units. Non-Institute for Scientific Information (ISI) journals are increasingly cornered into transdisciplinary Mode-2 functions, with the exception of specialist journals publishing in languages other than English. An externally cited impact factor can be calculated for these journals. The citation impact of non-ISI journals will be demonstrated using Science and Public Policy as the example.
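    The abstract states that an externally cited impact factor can be calculated for non-ISI journals. A sketch under the usual two-year impact-factor definition, counting only citations that come from other journals; the record layout is an assumption for illustration.

```python
def external_impact_factor(journal_name, items_published, citations, year):
    """items_published: {year: count} for the cited journal (assumed structure).
    citations: iterable of (citing_journal, cited_journal, cited_pub_year, citing_year)."""
    window = {year - 1, year - 2}
    external = sum(
        1 for citing, cited, pub_year, cite_year in citations
        if cited == journal_name and citing != journal_name   # exclude journal self-citations
        and cite_year == year and pub_year in window
    )
    citable = sum(items_published.get(y, 0) for y in window)
    return external / citable if citable else 0.0
```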
  5. Thelwall, M.; Kousha, K.; Stuart, E.; Makita, M.; Abdoli, M.; Wilson, P.; Levitt, J.: In which fields are citations indicators of research quality? (2023) 0.04
    0.0432256 = product of:
      0.1729024 = sum of:
        0.1729024 = weight(_text_:fields in 1033) [ClassicSimilarity], result of:
          0.1729024 = score(doc=1033,freq=8.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.54708534 = fieldWeight in 1033, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1033)
      0.25 = coord(1/4)
    
    Abstract
    Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top cited papers, researchers, and institutions. Nevertheless, the relationship between citations and research quality is poorly evidenced. We report the first large-scale science-wide academic evaluation of the relationship between research quality and citations (field normalized citation counts), correlating them for 87,739 journal articles in 34 field-based UK Units of Assessment (UoA). The two correlate positively in all academic fields, from very weak (0.1) to strong (0.5), reflecting broadly linear relationships in all fields. We give the first evidence that the correlations are positive even across the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai, although varying for some individual subjects and therefore more uncertain for these. We also show for the first time that no field has a citation threshold beyond which all articles are excellent quality, so lists of top cited articles are not pure collections of excellence, and neither is any top citation percentile indicator. Thus, while appropriately field normalized citations associate positively with research quality in all fields, they never perfectly reflect it, even at high values.
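    Field-normalized citation counts, as used in this study, typically divide an article's citations by the average for its field and publication year; the sketch below follows that common definition (not necessarily the paper's exact variant), with an assumed record structure.

```python
from collections import defaultdict

def field_normalized(articles):
    """articles: list of dicts with 'field', 'year', 'citations' (assumed structure)."""
    totals = defaultdict(lambda: [0.0, 0])
    for a in articles:
        t = totals[(a["field"], a["year"])]
        t[0] += a["citations"]
        t[1] += 1
    means = {k: s / n for k, (s, n) in totals.items()}
    return [a["citations"] / means[(a["field"], a["year"])] if means[(a["field"], a["year"])] else 0.0
            for a in articles]
```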
  6. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
          0.13836013 = score(doc=6091,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  7. Døsen, K.: One more reference on self-reference (1992) 0.03
    0.034590032 = product of:
      0.13836013 = sum of:
        0.13836013 = weight(_text_:22 in 4604) [ClassicSimilarity], result of:
          0.13836013 = score(doc=4604,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.61904186 = fieldWeight in 4604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=4604)
      0.25 = coord(1/4)
    
    Date
    7. 2.2005 14:10:22
  8. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.03
    0.030573556 = product of:
      0.122294225 = sum of:
        0.122294225 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
          0.122294225 = score(doc=3925,freq=4.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.54716086 = fieldWeight in 3925, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3925)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 15:22:28
  9. Zhao, D.; Strotmann, A.: Can citation analysis of Web publications better detect research fronts? (2007) 0.03
    0.030565115 = product of:
      0.12226046 = sum of:
        0.12226046 = weight(_text_:fields in 471) [ClassicSimilarity], result of:
          0.12226046 = score(doc=471,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38684773 = fieldWeight in 471, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=471)
      0.25 = coord(1/4)
    
    Abstract
    We present evidence that in some research fields, research published in journals and reported on the Web may collectively represent different evolutionary stages of the field, with journals lagging a few years behind the Web on average, and that a "two-tier" scholarly communication system may therefore be evolving. We conclude that in such fields, (a) for detecting current research fronts, author co-citation analyses (ACA) using articles published on the Web as a data source can outperform traditional ACAs using articles published in journals as data, and that (b) as a result, it is important to use multiple data sources in citation analysis studies of scholarly communication for a complete picture of communication patterns. Our evidence stems from comparing the respective intellectual structures of the XML research field, a subfield of computer science, as revealed from three sets of ACA covering two time periods: (a) from the field's beginnings in 1996 to 2001, and (b) from 2001 to 2006. For the first time period, we analyze research articles both from journals as indexed by the Science Citation Index (SCI) and from the Web as indexed by CiteSeer. We follow up with an ACA of SCI data for the second time period. We find that most of the trends in the evolution of this field from the first to the second time period, as revealed by comparing ACA results from the SCI between the two periods, were already apparent in the ACA results from CiteSeer during the first time period.
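    Author co-citation analysis (ACA), as used here, counts how often pairs of authors are cited together by the same paper. A minimal sketch of building those pair counts from per-paper sets of cited authors; the data structure is illustrative.

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(cited_author_sets):
    """cited_author_sets: one set of cited (first) authors per citing paper."""
    pairs = Counter()
    for authors in cited_author_sets:
        for a, b in combinations(sorted(authors), 2):
            pairs[(a, b)] += 1        # co-cited once per citing paper
    return pairs

# Hypothetical example: two citing papers, so the pair is co-cited twice.
print(cocitation_counts([{"Garfield", "Small", "Price"}, {"Garfield", "Small"}])[("Garfield", "Small")])  # 2
```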
  10. Robinson-García, N.; Jiménez-Contreras, E.; Torres-Salinas, D.: Analyzing data citation practices using the data citation index : a study of backup strategies of end users (2016) 0.03
    0.030565115 = product of:
      0.12226046 = sum of:
        0.12226046 = weight(_text_:fields in 3225) [ClassicSimilarity], result of:
          0.12226046 = score(doc=3225,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38684773 = fieldWeight in 3225, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3225)
      0.25 = coord(1/4)
    
    Abstract
    We present an analysis of data citation practices based on the Data Citation Index (DCI) (Thomson Reuters). This database, launched in 2012, links data sets and data studies with citations received from the other citation indexes. The DCI harvests citations to research data from papers indexed in the Web of Science. It relies on the information provided by the data repository. The findings of this study show that data citation practices are far from common in most research fields. Some differences have been reported in the way researchers cite data: Although in the areas of science and engineering & technology data sets were the most cited, in the social sciences and arts & humanities data studies play a greater role. A total of 88.1% of the records have received no citation, but some repositories show very low uncitedness rates. Although data citation practices are rare in most fields, they have expanded in disciplines such as crystallography and genomics. We conclude by emphasizing the role that the DCI could play in encouraging the consistent, standardized citation of research data, a role that would enhance their value as a means of following the research process from data collection to publication.
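    The 88.1% figure is an uncitedness rate, i.e. the share of indexed records with zero citations; from a list of per-record citation counts it is simply:

```python
def uncitedness_rate(citation_counts):
    """Share of records that have never been cited."""
    return sum(1 for c in citation_counts if c == 0) / len(citation_counts)
```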
  11. Braun, T.; Glänzel, W.; Grupp, H.: The scientometric weight of 50 nations in 27 scientific areas, 1989-1993 : Pt.1: All fields combined, mathematics, engineering, chemistry and physics (1995) 0.03
    0.03025792 = product of:
      0.12103168 = sum of:
        0.12103168 = weight(_text_:fields in 761) [ClassicSimilarity], result of:
          0.12103168 = score(doc=761,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38295972 = fieldWeight in 761, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0546875 = fieldNorm(doc=761)
      0.25 = coord(1/4)
    
  12. Meadows, A.J.: The citation characteristics of astronomical research literature (2004) 0.03
    0.03025792 = product of:
      0.12103168 = sum of:
        0.12103168 = weight(_text_:fields in 4416) [ClassicSimilarity], result of:
          0.12103168 = score(doc=4416,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38295972 = fieldWeight in 4416, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4416)
      0.25 = coord(1/4)
    
    Abstract
    The citation characteristics of papers in the Monthly Notices of the Royal Astronomical Society (especially for the years 1963-1965) have been examined as a means of studying the usage of astronomical literature in the UK. The decrease of usage with age has been investigated and the decay half-life determined. Particular attention has been paid to the immediacy effect, and to its possible variation in different sub-fields of astronomy. The citations have also been separated according to journal of origin. As a result of this study, a quantitative estimate has been made of the titles and backruns that are required to satisfy a given percentage of the demand for astronomical research literature in this country.
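    The decay half-life mentioned in the abstract is commonly operationalized as the median age of the items cited in a given year; a sketch under that assumption, with hypothetical data.

```python
def citation_half_life(cited_pub_years, citing_year):
    """Median age, in years, of the cited items (one common half-life definition)."""
    ages = sorted(citing_year - y for y in cited_pub_years)
    mid = len(ages) // 2
    return ages[mid] if len(ages) % 2 else (ages[mid - 1] + ages[mid]) / 2

print(citation_half_life([1962, 1960, 1958, 1955, 1950, 1940], citing_year=1964))  # 7.5
```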
  13. Hellqvist, B.: Referencing in the humanities and its implications for citation analysis (2010) 0.03
    0.03025792 = product of:
      0.12103168 = sum of:
        0.12103168 = weight(_text_:fields in 3329) [ClassicSimilarity], result of:
          0.12103168 = score(doc=3329,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.38295972 = fieldWeight in 3329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3329)
      0.25 = coord(1/4)
    
    Abstract
    This article studies citation practices in the arts and humanities from a theoretical and conceptual viewpoint, drawing on studies from fields like linguistics, history, library & information science, and the sociology of science. The use of references in the humanities is discussed in connection with the growing interest in the possibilities of applying citation analysis to humanistic disciplines. The study shows how the use of references within the humanities is connected to concepts of originality, to intellectual organization, and to searching and writing. Finally, it is acknowledged that the use of references is connected to stylistic, epistemological, and organizational differences, and these differences must be taken into account when applying citation analysis to humanistic disciplines.
  14. Wouters, P.; Vries, R. de: Formally citing the Web (2004) 0.02
    0.024452092 = product of:
      0.09780837 = sum of:
        0.09780837 = weight(_text_:fields in 3093) [ClassicSimilarity], result of:
          0.09780837 = score(doc=3093,freq=4.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.3094782 = fieldWeight in 3093, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.03125 = fieldNorm(doc=3093)
      0.25 = coord(1/4)
    
    Abstract
    How do authors refer to Web-based information sources in their formal scientific publications? It is not yet well known how scientists and scholars actually include new types of information sources, available through the new media, in their published work. This article reports on a comparative study of the lists of references in 38 scientific journals in five different scientific and social scientific fields. The fields are sociology, library and information science, biochemistry and biotechnology, neuroscience, and the mathematics of computing. As is well known, references, citations, and hyperlinks play different roles in academic publishing and communication. Our study focuses on hyperlinks as attributes of references in formal scholarly publications. The study developed and applied a method to analyze the differential roles of publishing media in the analysis of scientific and scholarly literature references. The present secondary databases that include reference and citation data (the Web of Science) cannot be used for this type of research. By the automated processing and analysis of the full text of scientific and scholarly articles, we were able to extract the references and hyperlinks contained in these references in relation to other features of the scientific and scholarly literature. Our findings show that hyperlinking references are indeed, as expected, abundantly present in the formal literature. They also tend to cite more recent literature than the average reference. The large majority of the references are to Web instances of traditional scientific journals. Other types of Web-based information sources are less well represented in the lists of references, except in the case of pure e-journals. We conclude that this can be explained by taking the role of the publisher into account. Indeed, it seems that the shift from print-based to electronic publishing has created new roles for the publisher. By shaping the way scientific references are hyperlinking to other information sources, the publisher may have a large impact on the availability of scientific and scholarly information.
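    Extracting the hyperlinks contained in reference strings, as the study did on article full text, can be sketched with a simple regular expression; the pattern and the sample strings below are simplifications for illustration only.

```python
import re

URL = re.compile(r"https?://[^\s>\])]+", re.IGNORECASE)

def hyperlinking_references(references):
    """Return (reference, URLs) pairs for references that contain at least one URL."""
    return [(ref, URL.findall(ref)) for ref in references if URL.search(ref)]

refs = [
    "Ingwersen, P. (1998): The calculation of Web impact factors.",
    "Example Author (2000): An e-journal article. http://example.org/ejournal/article1",
]
print(hyperlinking_references(refs))  # only the second reference carries a hyperlink
```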
  15. Garfield, E.; Stock, W.G.: Citation Consciousness : Interview with Eugene Garfield, chairman emeritus of ISI, Philadelphia (2002) 0.02
    0.02161877 = product of:
      0.08647508 = sum of:
        0.08647508 = weight(_text_:22 in 613) [ClassicSimilarity], result of:
          0.08647508 = score(doc=613,freq=2.0), product of:
            0.2235069 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06382575 = queryNorm
            0.38690117 = fieldWeight in 613, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=613)
      0.25 = coord(1/4)
    
    Source
    Password. 2002, H.6, S.22-25
  16. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 4279) [ClassicSimilarity], result of:
          0.0864512 = score(doc=4279,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.25 = coord(1/4)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  17. Pudovkin, A.I.; Garfield, E.: Algorithmic procedure for finding semantically related journals (2002) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 5220) [ClassicSimilarity], result of:
          0.0864512 = score(doc=5220,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 5220, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5220)
      0.25 = coord(1/4)
    
    Abstract
    Journal Citation Reports provides a classification of journals most heavily cited by a given journal and which most heavily cite that journal, but size variation is not taken into account. Pudovkin and Garfield suggest a procedure for meeting this difficulty. The relatedness of journal i to journal j is determined by the number of citations from journal i to journal j in a given year, normalized by the product of the number of papers published in journal j in that year and the number of references cited in journal i in that year. A multiplier of ten to the sixth is suggested to bring the values into an easily perceptible range. While citations received depend upon the overall cumulative number of papers published by a journal, the current year is utilized since that data is available in JCR. Citations to current-year papers would be quite low in most fields and are thus not included. To produce the final index, the maximum of the A-citing-B value and the B-citing-A value is chosen and used to indicate the closeness of the journals. The procedure is illustrated for the journal Genetics.
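    In the notation of the abstract, the directed relatedness of journal i to journal j is R_ij = 10^6 * C_ij / (P_j * Ref_i), and the final index takes the larger of the two directions. A direct transcription, with argument names chosen for illustration:

```python
def relatedness(cites_i_to_j, papers_j, refs_i):
    """Directed relatedness of journal i to journal j in a given year, scaled by 10^6."""
    return 1e6 * cites_i_to_j / (papers_j * refs_i)

def relatedness_index(c_ab, c_ba, papers_a, papers_b, refs_a, refs_b):
    """Closeness of journals A and B: the maximum of the two directed values."""
    return max(relatedness(c_ab, papers_b, refs_a),
               relatedness(c_ba, papers_a, refs_b))
```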
  18. Aström, F.: Changes in the LIS research front : time-sliced cocitation analyses of LIS journal articles, 1990-2004 (2007) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 329) [ClassicSimilarity], result of:
          0.0864512 = score(doc=329,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 329, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=329)
      0.25 = coord(1/4)
    
    Abstract
    Based on articles published in 1990-2004 in 21 library and information science (LIS) journals, a set of cocitation analyses was performed to study changes in research fronts over the last 15 years, to establish where LIS is now, and to discuss where it is heading. To study research fronts, here defined as current and influential cocited articles, a citations-among-documents methodology was applied; and to study changes, the analyses were time-sliced into three 5-year periods. The results show a stable structure of two distinct research fields: informetrics and information seeking and retrieval (ISR). However, experimental retrieval research and user oriented research have merged into one ISR field; and IR and informetrics also show signs of coming closer together, sharing research interests and methodologies, making informetrics research more visible in mainstream LIS research. Furthermore, the focus on the Internet, both in ISR research and in informetrics, where webometrics has quickly become a dominating research area, is an important change. The future is discussed in terms of LIS dependency on technology, how integration of research areas as well as technical systems can be expected to continue to characterize LIS research, and how webometrics will continue to develop and find applications.
  19. Raan, A.F.J. van: Self-citation as an impact-reinforcing mechanism in the science system (2008) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 2006) [ClassicSimilarity], result of:
          0.0864512 = score(doc=2006,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 2006, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2006)
      0.25 = coord(1/4)
    
    Abstract
    Previous research has demonstrated that lower performance groups have a larger size-dependent cumulative advantage for receiving citations than do top-performance groups. Furthermore, regardless of performance, larger groups have less not-cited publications. Particularly for the lower performance groups, the fraction of not-cited publications decreases considerably with size. These phenomena can be explained with a model in which self-citation acts as a promotion mechanism for external citations. In this article, we show that for self-citations, similar size-dependent scaling rules apply as for citations, but generally the power law exponents are higher for self-citations as compared to citations. We also find that the fraction of self-citations is smaller for the higher performance groups, and this fraction decreases more rapidly with increasing journal impact than that for lower performance groups. An interesting novel finding is that the variance in the correlation of the number of self-citations with size is considerably less than the variance for external citations. This is a clear indication that size is a stronger determinant for self-citations than it is for external citations. Both higher and particularly lower performance groups have a size-dependent cumulative advantage for self-citations, but for the higher performance groups only in the lower impact journals and in fields with low citation density.
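    The size-dependent scaling rules described here are power laws (self-citations ≈ a · size^b), whose exponent b can be estimated with an ordinary log-log least-squares fit; the sketch below shows that standard approach rather than the authors' exact procedure.

```python
from math import log

def power_law_exponent(sizes, counts):
    """Slope of the least-squares fit of log(counts) against log(sizes)."""
    xs, ys = [log(s) for s in sizes], [log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var
```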
  20. Gorraiz, J.; Purnell, P.J.; Glänzel, W.: Opportunities for and limitations of the Book Citation Index (2013) 0.02
    0.0216128 = product of:
      0.0864512 = sum of:
        0.0864512 = weight(_text_:fields in 966) [ClassicSimilarity], result of:
          0.0864512 = score(doc=966,freq=2.0), product of:
            0.31604284 = queryWeight, product of:
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.06382575 = queryNorm
            0.27354267 = fieldWeight in 966, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.951651 = idf(docFreq=849, maxDocs=44218)
              0.0390625 = fieldNorm(doc=966)
      0.25 = coord(1/4)
    
    Abstract
    This article offers important background information about a new product, the Book Citation Index (BKCI), launched in 2011 by Thomson Reuters. Information is illustrated by some new facts concerning the BKCI's use in bibliometrics, coverage analysis, and a series of idiosyncrasies worthy of further discussion. The BKCI was launched primarily to assist researchers in identifying useful and relevant research that was previously invisible to them, owing to the lack of significant book content in citation indexes such as the Web of Science. So far, the content of 33,000 books has been added to the desktops of the global research community, the majority in the arts, humanities, and social sciences fields. Initial analyses of the data from the BKCI have indicated that the BKCI, in its current version, should not be used for bibliometric or evaluative purposes. The most significant limitations to this potential application are the high share of publications without address information, the inflation of publication counts, the lack of cumulative citation counts from different hierarchical levels, and inconsistency in citation counts between the cited reference search and the Book Citation Index. However, the BKCI is a first step toward creating a reliable and necessary citation data source for monographs, a very challenging issue, because, unlike journals and conference proceedings, books have specific requirements, and several problems emerge not only in the context of subject classification, but also in their role as cited publications and in citing publications.