Search (445 results, page 2 of 23)

  • theme_ss:"Informetrie"
  1. Meadows, J.: The immediacy effect - then and now (2004) 0.03
    0.025199067 = product of:
      0.10079627 = sum of:
        0.02834915 = weight(_text_:libraries in 4418) [ClassicSimilarity], result of:
          0.02834915 = score(doc=4418,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 4418, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=4418)
        0.07244711 = weight(_text_:studies in 4418) [ClassicSimilarity], result of:
          0.07244711 = score(doc=4418,freq=6.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.45816267 = fieldWeight in 4418, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4418)
      0.25 = coord(2/8)
    
    Abstract
    The 1960s saw the birth of what is now called "scientometrics". One of the queries that arose then related to citations of previous literature. Was recent literature cited proportionately more than older literature? Studies by Price, along with that reprinted here, seemed to indicate that the answer was "yes". This "immediacy effect", as it was labelled, could be measured in quantitative terms, but how to do so required some thought. For example, what was the best form of index for representing immediacy, and what errors were involved in estimating the effect? Discussions of the usage of past literature could have practical implications for libraries. One question, therefore, was what implications, if any, citation studies had for the provision of journals to library users. On the scientometrics side, there were such questions as why an immediacy effect occurred and to what extent it could be discerned in different subject areas. This article surveys attempts to examine questions like these over the period from the 1960s to the present day, updating an article published in Journal of Documentation in 1967. Keywords: Literature, Records management, User studies
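    Note: the score breakdown shown for each entry follows Lucene's ClassicSimilarity (TF-IDF) explanation format. A minimal Python sketch, using only the figures printed in the tree for doc 4418 above, reproduces the displayed value; the variable names are illustrative and not part of the search engine itself.

        import math

        # Figures copied from the explanation tree for doc 4418 (entry 1 above).
        QUERY_NORM = 0.03962768
        MAX_DOCS = 44218

        def idf(doc_freq):
            # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(MAX_DOCS / (doc_freq + 1))

        def term_score(freq, doc_freq, field_norm):
            tf = math.sqrt(freq)                            # tf(freq) = sqrt(termFreq)
            query_weight = idf(doc_freq) * QUERY_NORM       # queryWeight
            field_weight = tf * idf(doc_freq) * field_norm  # fieldWeight
            return query_weight * field_weight              # weight(_text_:term in doc)

        w_libraries = term_score(freq=2.0, doc_freq=4499, field_norm=0.046875)  # ~0.02834915
        w_studies   = term_score(freq=6.0, doc_freq=2222, field_norm=0.046875)  # ~0.07244711

        coord = 2 / 8                              # 2 of the 8 query terms matched
        score = coord * (w_libraries + w_studies)
        print(round(score, 9))                     # ~0.025199067, shown rounded as 0.03

    The same recipe applies to every explanation tree on this page: each matching term contributes queryWeight x fieldWeight, and coord scales the sum by the fraction of query terms that matched.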
  2. Rafols, I.; Leydesdorff, L.: Content-based and algorithmic classifications of journals : perspectives on the dynamics of scientific communication and indexer effects (2009) 0.02
    0.023673836 = product of:
      0.094695345 = sum of:
        0.059839215 = weight(_text_:case in 3095) [ClassicSimilarity], result of:
          0.059839215 = score(doc=3095,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.34346986 = fieldWeight in 3095, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3095)
        0.034856133 = weight(_text_:studies in 3095) [ClassicSimilarity], result of:
          0.034856133 = score(doc=3095,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 3095, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3095)
      0.25 = coord(2/8)
    
    Abstract
    The aggregated journal-journal citation matrix - based on the Journal Citation Reports (JCR) of the Science Citation Index - can be decomposed by indexers or algorithmically. In this study, we test the results of two recently available algorithms for the decomposition of large matrices against two content-based classifications of journals: the ISI Subject Categories and the field/subfield classification of Glänzel and Schubert (2003). The content-based schemes allow for the attribution of more than a single category to a journal, whereas the algorithms maximize the ratio of within-category citations over between-category citations in the aggregated category-category citation matrix. By adding categories, indexers generate between-category citations, which may enrich the database, for example, in the case of inter-disciplinary developments. Algorithmic decompositions, on the other hand, are more heavily skewed towards a relatively small number of categories, while this is deliberately counter-acted upon in the case of content-based classifications. Because of the indexer effects, science policy studies and the sociology of science should be careful when using content-based classifications, which are made for bibliographic disclosure, and not for the purpose of analyzing latent structures in scientific communications. Despite the large differences among them, the four classification schemes enable us to generate surprisingly similar maps of science at the global level. Erroneous classifications are cancelled as noise at the aggregate level, but may disturb the evaluation locally.
  3. Leydesdorff, L.; Zhou, P.; Bornmann, L.: How can journal impact factors be normalized across fields of science? : An assessment in terms of percentile ranks and fractional counts (2013) 0.02
    0.023673836 = product of:
      0.094695345 = sum of:
        0.059839215 = weight(_text_:case in 532) [ClassicSimilarity], result of:
          0.059839215 = score(doc=532,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.34346986 = fieldWeight in 532, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
        0.034856133 = weight(_text_:studies in 532) [ClassicSimilarity], result of:
          0.034856133 = score(doc=532,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.22043361 = fieldWeight in 532, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=532)
      0.25 = coord(2/8)
    
    Abstract
    Using the CD-ROM version of the Science Citation Index 2010 (N = 3,705 journals), we study the (combined) effects of (a) fractional counting on the impact factor (IF) and (b) transformation of the skewed citation distributions into a distribution of 100 percentiles and six percentile rank classes (top-1%, top-5%, etc.). Do these approaches lead to field-normalized impact measures for journals? In addition to the 2-year IF (IF2), we consider the 5-year IF (IF5), the respective numerators of these IFs, and the number of Total Cites, counted both as integers and fractionally. These various indicators are tested against the hypothesis that the classification of journals into 11 broad fields by PatentBoard/NSF (National Science Foundation) provides statistically significant between-field effects. Using fractional counting the between-field variance is reduced by 91.7% in the case of IF5, and by 79.2% in the case of IF2. However, the differences in citation counts are not significantly affected by fractional counting. These results accord with previous studies, but the longer citation window of a fractionally counted IF5 can lead to significant improvement in the normalization across fields.
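    The percentile transformation described in this abstract can be sketched as follows; the six class boundaries (bottom 50%, 50-75%, 75-90%, 90-95%, 95-99%, top 1%), the toy data, and the tie handling are assumptions for illustration, not taken from the paper.

        import numpy as np

        def percentile_ranks(citations):
            # Percentile of each paper within its reference set (0-100, higher = more cited).
            # Simple sketch; published variants differ in how ties are handled.
            c = np.asarray(citations, dtype=float)
            return np.array([100.0 * np.mean(c < x) for x in c])

        def rank_class(p):
            # Assumed six percentile rank classes (top-1%, top-5%, etc.).
            for threshold, label in [(99, "top 1%"), (95, "top 5%"), (90, "top 10%"),
                                     (75, "top 25%"), (50, "top 50%")]:
                if p >= threshold:
                    return label
            return "bottom 50%"

        cites = [0, 1, 1, 2, 3, 5, 8, 13, 40, 250]   # toy, heavily skewed citation counts
        for c, p in zip(cites, percentile_ranks(cites)):
            print(c, round(p, 1), rank_class(p))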
  4. Oppenheim, C.: Do citations count? : Citation indexing and the Research Assessment Exercise (RAE) (1996) 0.02
    0.02339217 = product of:
      0.09356868 = sum of:
        0.037798867 = weight(_text_:libraries in 6673) [ClassicSimilarity], result of:
          0.037798867 = score(doc=6673,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.29036054 = fieldWeight in 6673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.0625 = fieldNorm(doc=6673)
        0.055769812 = weight(_text_:studies in 6673) [ClassicSimilarity], result of:
          0.055769812 = score(doc=6673,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.35269377 = fieldWeight in 6673, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0625 = fieldNorm(doc=6673)
      0.25 = coord(2/8)
    
    Abstract
    Citations are used to illustrate or elaborate on a point, or to criticize. Citation studies, based on ISI's citation indexes, can help evaluate scientific research, while impact factors aid libraries in deciding which journals to cancel or purchase. Suggests that citation counts can replace the costly RAE in assessing the research output of university departments.
  5. Meho, L.I.; Sonnenwald, D.H.: Citation ranking versus peer evaluation of senior faculty research performance : a case study of Kurdish scholarship (2000) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 4382) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4382,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4382)
        0.04182736 = weight(_text_:studies in 4382) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4382,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4382, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4382)
      0.25 = coord(2/8)
    
    Abstract
    The purpose of this study is to analyze the relationship between citation ranking and peer evaluation in assessing senior faculty research performance. Other studies typically derive their peer evaluation data directly from referees, often in the form of ranking. This study uses two additional sources of peer evaluation data: citation content analysis and book review content analysis. 2 main questions are investigated: (a) To what degree does citation ranking correlate with data from citation content analysis, book reviews and peer ranking? (b) Is citation ranking a valid evaluative indicator of research performance of senior faculty members? This study shows that citation ranking can provide a valid indicator for comparative evaluation of senior faculty research performance.
  6. Chen, C.; Cribbin, T.; Macredie, R.; Morar, S.: Visualizing and tracking the growth of competing paradigms : two case studies (2002) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 602) [ClassicSimilarity], result of:
          0.05077526 = score(doc=602,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
        0.04182736 = weight(_text_:studies in 602) [ClassicSimilarity], result of:
          0.04182736 = score(doc=602,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 602, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=602)
      0.25 = coord(2/8)
    
  7. Nederhof, A.J.; Visser, M.S.: Quantitative deconstruction of citation impact indicators : waxing field impact but waning journal impact (2004) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 4419) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4419,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4419)
        0.04182736 = weight(_text_:studies in 4419) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4419,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4419, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4419)
      0.25 = coord(2/8)
    
    Abstract
    In two case studies of research units, reference values used to benchmark research performance appeared to show contradictory results: the average citation level in the subfields (FCSm) increased worldwide, while the citation level of the journals (JCSm) decreased, although concomitant changes were expected. Explanations were sought in a shift in preference of document types, a change in publication preference for subfields, and changes in journal coverage. Publishing in newly covered journals with a low impact had a negative effect on impact ratios. However, the main factor behind the increase in FCSm was the distribution of articles across the five-year block periods that were studied. Publication in lower-impact journals produced a lagging JCSm. Actual JCSm, FCSm, and citations per publication (CPP) values are not very informative, either about research performance or about the development of impact over time in a certain subfield, when used as block indicators. Normalized citation impact indicators are free from such effects and should be consulted primarily in research performance assessments.
  8. Shibata, N.; Kajikawa, Y.; Matsushima, K.: Topological analysis of citation networks to discover the future core articles (2007) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 286) [ClassicSimilarity], result of:
          0.05077526 = score(doc=286,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=286)
        0.04182736 = weight(_text_:studies in 286) [ClassicSimilarity], result of:
          0.04182736 = score(doc=286,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=286)
      0.25 = coord(2/8)
    
    Abstract
    In this article, we investigated the factors determining the capability of academic articles to be cited in the future using a topological analysis of citation networks. The basic idea is that articles that will have many citations were in a "similar" position topologically in the past. To validate this hypothesis, we investigated the correlation between future times cited and three measures of centrality: clustering centrality, closeness centrality, and betweenness centrality. We also analyzed the effect of aging as well as of self-correlation of times cited. Case studies were performed in the two following recent representative innovations: Gallium Nitride and Complex Networks. The results suggest that times cited is the main factor in explaining the near future times cited, and betweenness centrality is correlated with the distant future times cited. The effect of topological position on the capability to be cited is influenced by the migrating phenomenon in which the activated center of research shifts from an existing domain to a new emerging domain.
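    The three centrality measures named in this abstract can be computed directly with networkx; the toy citation graph below and the reading of "clustering centrality" as the local clustering coefficient are assumptions made for illustration.

        import networkx as nx

        # Toy citation network: an edge u -> v means article u cites article v.
        G = nx.DiGraph([
            ("a", "b"), ("a", "c"), ("b", "c"),
            ("d", "b"), ("d", "c"), ("e", "a"), ("e", "c"),
        ])

        betweenness = nx.betweenness_centrality(G)       # brokerage between clusters
        closeness = nx.closeness_centrality(G)            # proximity to the rest of the network
        clustering = nx.clustering(G.to_undirected())     # local density around each article

        for n in sorted(G.nodes):
            print(n, round(betweenness[n], 3), round(closeness[n], 3), round(clustering[n], 3))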
  9. Marshakova-Shaikevich, I.: Bibliometric maps of field of science (2005) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 1069) [ClassicSimilarity], result of:
          0.05077526 = score(doc=1069,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 1069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1069)
        0.04182736 = weight(_text_:studies in 1069) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1069,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1069)
      0.25 = coord(2/8)
    
    Abstract
    The present paper is devoted to two directions in algorithmic classificatory procedures: journal co-citation analysis as an example of citation networks, and lexical analysis of keywords in titles and texts. Common to both approaches is the general idea of normalizing the deviations of the observed data from the mathematical expectation. Applying the same formula leads to the discovery of statistically significant links between objects (journals in one case, keywords in the other). The results of the journal co-citation analysis are presented in tables and maps for the fields "Women's Studies" and "Information Science and Library Science". An experimental attempt at establishing textual links between words was carried out on two samples from the SSCI database: (1) EDUCATION and (2) ETHICS. The EDUCATION file included 2180 documents (of which 751 had abstracts); the ETHICS file included 807 documents (289 abstracts). Some examples of the results of this pilot study are given in tabular form. The binary links between words discovered in this way may form triplets or other groups with more than two member words.
  10. Thelwall, M.; Klitkou, A.; Verbeek, A.; Stuart, D.; Vincent, C.: Policy-relevant Webometrics for individual scientific fields (2010) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 3574) [ClassicSimilarity], result of:
          0.05077526 = score(doc=3574,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 3574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3574)
        0.04182736 = weight(_text_:studies in 3574) [ClassicSimilarity], result of:
          0.04182736 = score(doc=3574,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 3574, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3574)
      0.25 = coord(2/8)
    
    Abstract
    Despite over 10 years of research there is no agreement on the most suitable roles for Webometric indicators in support of research policy and almost no field-based Webometrics. This article partly fills these gaps by analyzing the potential of policy-relevant Webometrics for individual scientific fields with the help of 4 case studies. Although Webometrics cannot provide robust indicators of knowledge flows or research impact, it can provide some evidence of networking and mutual awareness. The scope of Webometrics is also relatively wide, including not only research organizations and firms but also intermediary groups like professional associations, Web portals, and government agencies. Webometrics can, therefore, provide evidence about the research process to complement peer review, bibliometric, and patent indicators: tracking the early, mainly prepublication development of new fields and research funding initiatives, assessing the role and impact of intermediary organizations and the need for new ones, and monitoring the extent of mutual awareness in particular research areas.
  11. Albarrán, P.; Perianes-Rodríguez, A.; Ruiz-Castillo, J.: Differences in citation impact across countries (2015) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 1665) [ClassicSimilarity], result of:
          0.05077526 = score(doc=1665,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 1665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1665)
        0.04182736 = weight(_text_:studies in 1665) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1665,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1665, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1665)
      0.25 = coord(2/8)
    
    Abstract
    Using a large data set, indexed by Thomson Reuters, consisting of 4.4 million articles published in 1998-2003 with a 5-year citation window for each year, this article studies country citation distributions for a partitioning of the world into 36 countries and two geographical areas in eight broad scientific fields and the all-sciences case. The two key findings are the following. First, country citation distributions are highly skewed and very similar to each other in all fields. Second, to a large extent, differences in country citation distributions can be accounted for by scale factors. The empirical situation described in the article helps to understand why international comparisons of citation impact according to (a) mean citations and (b) the percentage of articles in each country belonging to the top 10% of the most cited articles are so similar to each other.
  12. Abramo, G.; D'Angelo, C.A.; Di Costa, F.: A new approach to measure the scientific strengths of territories (2015) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 1852) [ClassicSimilarity], result of:
          0.05077526 = score(doc=1852,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 1852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=1852)
        0.04182736 = weight(_text_:studies in 1852) [ClassicSimilarity], result of:
          0.04182736 = score(doc=1852,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 1852, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1852)
      0.25 = coord(2/8)
    
    Abstract
    The current work applies a method for mapping the supply of new knowledge from public research organizations, in this case from Italian institutions at the level of regions and provinces (NUTS2 and NUTS3). Through the analysis of scientific production indexed in the Web of Science for the years 2006-2010, the new knowledge is classified in subject categories and mapped according to an algorithm for the reconciliation of authors' affiliations. Unlike other studies in the literature based on simple counting of publications, the present study adopts an indicator, Scientific Strength, which takes account of both the quantity of scientific production and its impact on the advancement of knowledge. The differences in the results that arise from the 2 approaches are examined. The results of works of this kind can inform public research policies, at national and local levels, as well as the localization strategies of research-based companies.
  13. Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.: Strategic intelligence on emerging technologies : scientometric overlay mapping (2017) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 3322) [ClassicSimilarity], result of:
          0.05077526 = score(doc=3322,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 3322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=3322)
        0.04182736 = weight(_text_:studies in 3322) [ClassicSimilarity], result of:
          0.04182736 = score(doc=3322,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 3322, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=3322)
      0.25 = coord(2/8)
    
    Abstract
    This paper examines the use of scientometric overlay mapping as a tool of "strategic intelligence" to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical, social, and cognitive spaces. To do so, we longitudinally analyze (with publication and patent data) three case studies of emerging technologies in the medical domain. These are RNA interference (RNAi), human papillomavirus (HPV) testing technologies for cervical cancer, and thiopurine methyltransferase (TPMT) genetic testing. Given the flexibility (i.e., adaptability to different sources of data) and granularity (i.e., applicability across multiple levels of data aggregation) of overlay mapping techniques, we argue that these techniques can favor the integration and comparison of results from different contexts and cases, thus potentially functioning as a platform for "distributed" strategic intelligence for analysts and decision makers.
  14. Perez-Molina, E.: The role of patent citations as a footprint of technology (2018) 0.02
    0.023150655 = product of:
      0.09260262 = sum of:
        0.05077526 = weight(_text_:case in 4187) [ClassicSimilarity], result of:
          0.05077526 = score(doc=4187,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.29144385 = fieldWeight in 4187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=4187)
        0.04182736 = weight(_text_:studies in 4187) [ClassicSimilarity], result of:
          0.04182736 = score(doc=4187,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 4187, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=4187)
      0.25 = coord(2/8)
    
    Abstract
    The fact that patents are documents highly constrained by law and structured by international treaties make them a unique body of publications for tracing the history and evolution of technology. The distinctiveness of prior art patent citations compared to bibliographic references in the nonpatent literature is discussed. Starting from these observations and using the patent classification scheme as a framework of reference, we have identified a data structure, the "technology footprint," derived from the patents cited as prior art for a selected set of patents. This data structure will provide us with dynamic information about the technological components of the selected set of patents, which represents a technology, company, or inventor. Two case studies are presented in order to illustrate the visualization of the technology footprint: one concerning an inventor-Mr. Engelbart, the inventor of the "computer mouse"-and another concerning the early years of a technology-computerized tomography.
  15. Thelwall, M.: Extracting macroscopic information from Web links (2001) 0.02
    0.022901682 = product of:
      0.09160673 = sum of:
        0.042312715 = weight(_text_:case in 6851) [ClassicSimilarity], result of:
          0.042312715 = score(doc=6851,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 6851, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6851)
        0.049294014 = weight(_text_:studies in 6851) [ClassicSimilarity], result of:
          0.049294014 = score(doc=6851,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 6851, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6851)
      0.25 = coord(2/8)
    
    Abstract
    Much has been written about the potential and pitfalls of macroscopic Web-based link analysis, yet there have been no studies that have provided clear statistical evidence that any of the proposed calculations can produce results over large areas of the Web that correlate with phenomena external to the Internet. This article attempts to provide such evidence through an evaluation of Ingwersen's (1998) proposed external Web Impact Factor (WIF) for the original use of the Web: the interlinking of academic research. In particular, it studies the case of the relationship between academic hyperlinks and research activity for universities in Britain, a country chosen for its variety of institutions and the existence of an official government rating exercise for research. After reviewing the numerous reasons why link counts may be unreliable, it demonstrates that four different WIFs do, in fact, correlate with the conventional academic research measures. The WIF delivering the greatest correlation with research rankings was the ratio of Web pages with links pointing at research-based pages to faculty numbers. The scarcity of links to electronic academic papers in the data set suggests that, in contrast to citation analysis, this WIF is measuring the reputations of universities and their scholars, rather than the quality of their publications
  16. Järvelin, K.; Vakkari, P.: LIS research across 50 years: content analysis of journal articles : offering an information-centric conception of memes (2022) 0.02
    0.022901682 = product of:
      0.09160673 = sum of:
        0.042312715 = weight(_text_:case in 949) [ClassicSimilarity], result of:
          0.042312715 = score(doc=949,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.24286987 = fieldWeight in 949, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.0390625 = fieldNorm(doc=949)
        0.049294014 = weight(_text_:studies in 949) [ClassicSimilarity], result of:
          0.049294014 = score(doc=949,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.3117402 = fieldWeight in 949, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.0390625 = fieldNorm(doc=949)
      0.25 = coord(2/8)
    
    Abstract
    Purpose This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015. Design/methodology/approach The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology. Findings The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has given up its earlier strong position towards the end of the years analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed. Originality/value Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (8 dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).
  17. Sen, B.K.: Ranganathan's contribution to bibliometrics (2015) 0.02
    0.02273238 = product of:
      0.09092952 = sum of:
        0.04910217 = weight(_text_:libraries in 2790) [ClassicSimilarity], result of:
          0.04910217 = score(doc=2790,freq=6.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.3771894 = fieldWeight in 2790, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=2790)
        0.04182736 = weight(_text_:studies in 2790) [ClassicSimilarity], result of:
          0.04182736 = score(doc=2790,freq=2.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.26452032 = fieldWeight in 2790, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=2790)
      0.25 = coord(2/8)
    
    Abstract
    Traces the origin of the term librametry. Shows how librametry helped Ranganathan to develop the staff formula for different libraries, and how it can support decision making on the establishment of rural and branch libraries, and of dormitory and service libraries. His maintenance of statistics on various library activities revealed the growth pattern of the library collection, use of the collection by users, busy and very busy hours in the circulation and reference sections, and so on. He also developed a method for the optimal procurement of books for every department in the university. Ranganathan also showed statistically that, on average, Colon class numbers are shorter than DC class numbers. With the passage of time, bibliometrics overshadowed librametrics. Ranganathan did not define librametrics, nor did he isolate its components. These lacunae are filled in this article. It is also shown that a substantial part of librametrics is occupied by bibliometrics.
    Source
    Annals of library and information studies. 62(2015) no.4, pp.222-225
  18. Schneider, J.W.; Borlund, P.: Matrix comparison, part 1 : motivation and important issues for measuring the resemblance between proximity measures or ordination results (2007) 0.02
    0.022404997 = product of:
      0.08961999 = sum of:
        0.033850174 = weight(_text_:case in 584) [ClassicSimilarity], result of:
          0.033850174 = score(doc=584,freq=2.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.1942959 = fieldWeight in 584, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03125 = fieldNorm(doc=584)
        0.055769812 = weight(_text_:studies in 584) [ClassicSimilarity], result of:
          0.055769812 = score(doc=584,freq=8.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.35269377 = fieldWeight in 584, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03125 = fieldNorm(doc=584)
      0.25 = coord(2/8)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means of evaluation in informetric studies such as cocitation analysis. In this first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, are introduced and discussed. The motivation is spurred by the recent debate on choice of proximity measures and their potential influence upon clustering and ordination results. The two important issues discussed here are matrix generation and the composition of proximity measures. The approach to matrix generation is demonstrated for the same data set, i.e., how data is represented and transformed in a matrix, evidently determines the behavior of proximity measures. Two different matrix generation approaches, in all probability, will lead to different proximity rankings of objects, which further lead to different ordination and clustering results for the same set of objects. Further, a resemblance in the composition of formulas indicates whether two proximity measures may produce similar ordination and clustering results. However, as shown in the case of the angular correlation and cosine measures, a small deviation in otherwise similar formulas can lead to different rankings depending on the contour of the data matrix transformed. Eventually, the behavior of proximity measures, that is whether they produce similar rankings of objects, is more or less data-specific. Consequently, the authors recommend the use of empirical matrix comparison techniques for individual studies to investigate the degree of resemblance between proximity measures or their ordination results. In part two of the article, the authors introduce and demonstrate two related statistical matrix comparison techniques the Mantel test and Procrustes analysis, respectively. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and thus help choosing suitable proximity measures.
  19. Egghe, L.; Rousseau, R.: Averaging and globalising quotients of informetric and scientometric data (1996) 0.02
    0.021978518 = product of:
      0.08791407 = sum of:
        0.071807064 = weight(_text_:case in 7659) [ClassicSimilarity], result of:
          0.071807064 = score(doc=7659,freq=4.0), product of:
            0.1742197 = queryWeight, product of:
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.03962768 = queryNorm
            0.41216385 = fieldWeight in 7659, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.3964143 = idf(docFreq=1480, maxDocs=44218)
              0.046875 = fieldNorm(doc=7659)
        0.01610701 = product of:
          0.03221402 = sum of:
            0.03221402 = weight(_text_:22 in 7659) [ClassicSimilarity], result of:
              0.03221402 = score(doc=7659,freq=2.0), product of:
                0.13876937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03962768 = queryNorm
                0.23214069 = fieldWeight in 7659, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=7659)
          0.5 = coord(1/2)
      0.25 = coord(2/8)
    
    Abstract
    It is possible, using ISI's Journal Citation Reports (JCR), to calculate average impact factors (AIF) for the JCR's subject categories, but it can be more useful to know the global impact factor (GIF) of a subject category and compare the 2 values. Reports results of a study comparing the relationship between AIFs and GIFs of subjects, based on the particular case of the average impact factor of a subfield versus the impact factor of this subfield as a whole, the difference being that between an average of quotients, denoted AQ, and a global average obtained as a quotient of averages, denoted GQ. In the case of impact factors, AQ becomes the average impact factor of a field, and GQ becomes its global impact factor. Discusses a number of applications of this technique in the context of informetrics and scientometrics.
    Source
    Journal of information science. 22(1996) no.3, pp.165-170
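    The AQ/GQ distinction described in this abstract is easy to see with hypothetical figures: three journals with (citations, citable items) counts give a different subfield impact factor depending on whether one averages the individual quotients or divides the totals. The numbers below are invented purely for illustration.

        # Toy subfield: (citations received, citable items) for three journals.
        journals = [(10, 5), (2, 10), (30, 10)]

        # AQ: average of quotients (mean of the journals' own impact factors).
        aq = sum(c / n for c, n in journals) / len(journals)

        # GQ: quotient of averages (total citations over total citable items),
        # i.e. the subfield treated as one "global" journal.
        gq = sum(c for c, _ in journals) / sum(n for _, n in journals)

        print(round(aq, 3), round(gq, 3))   # 1.733 vs 1.68 -- AQ and GQ need not agree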
  20. Snyder, H.; Cronin, B.; Davenport, E.: What's the use of citation? : Citation analysis as a literature topic in selected disciplines of the social sciences (1995) 0.02
    0.021875493 = product of:
      0.08750197 = sum of:
        0.02834915 = weight(_text_:libraries in 1825) [ClassicSimilarity], result of:
          0.02834915 = score(doc=1825,freq=2.0), product of:
            0.13017908 = queryWeight, product of:
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.03962768 = queryNorm
            0.2177704 = fieldWeight in 1825, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2850544 = idf(docFreq=4499, maxDocs=44218)
              0.046875 = fieldNorm(doc=1825)
        0.05915282 = weight(_text_:studies in 1825) [ClassicSimilarity], result of:
          0.05915282 = score(doc=1825,freq=4.0), product of:
            0.15812531 = queryWeight, product of:
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.03962768 = queryNorm
            0.37408823 = fieldWeight in 1825, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9902744 = idf(docFreq=2222, maxDocs=44218)
              0.046875 = fieldNorm(doc=1825)
      0.25 = coord(2/8)
    
    Abstract
    Reports results of a study to investigate the place and role of citation analysis in selected disciplines in the social sciences, including library and information science. 5 core library and information science periodicals: Journal of documentation; Library quarterly; Journal of the American Society for Information Science; College and research libraries; and the Journal of information science, were studied to determine the percentage of articles devoted to citation analysis and to develop an inductive typology for categorizing the major foci of research being conducted under the rubric of citation analysis. A similar analysis was conducted for periodicals in other social sciences disciplines. Demonstrates how the rubric can be used to determine how citation analysis is applied within library and information science and other disciplines. By isolating citation analysis from bibliometrics in general, this work is differentiated from other, previous studies. Analysis of data from a 10-year sample of transdisciplinary social sciences literature suggests that 2 application areas predominate: the validity of citation as an evaluation tool; and impact or performance studies of authors, periodicals, and institutions.

Languages

  • e 434
  • d 9
  • dk 1
  • ro 1

Types

  • a 439
  • el 7
  • m 5
  • r 1
  • s 1