Search (1118 results, page 55 of 56)

  • language_ss:"e"
  • theme_ss:"Informetrie"
  1. Leydesdorff, L.; Opthof, T.: Scopus's source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations (2010)

    Abstract
    Impact factors (and similar measures such as the Scimago Journal Rankings) suffer from two problems: (a) citation behavior varies among fields of science and, therefore, leads to systematic differences, and (b) there are no statistics to inform us whether differences are significant. The recently introduced "source normalized impact per paper" indicator of Scopus tries to remedy the first of these two problems, but a number of normalization decisions are involved, which makes it impossible to test for significance. Using fractional counting of citations (based on the assumption that impact is proportionate to the number of references in the citing documents), citations can be contextualized at the paper level and aggregated impacts of sets can be tested for their significance. It can be shown that the weighted impact of Annals of Mathematics (0.247) is not so much lower than that of Molecular Cell (0.386), despite a fivefold difference between their impact factors (2.793 and 13.156, respectively).
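    A minimal sketch of fractional citation counting as described above (an illustration, not the authors' code): each citation is weighted by one over the number of references in the citing paper, so a citation from a reference-heavy paper counts for less.

      def fractional_impact(citing_ref_counts):
          """citing_ref_counts: one entry per citing paper, giving that paper's number of references."""
          return sum(1.0 / n for n in citing_ref_counts if n > 0)

      # Example: three citing papers carrying 10, 25, and 50 references each.
      print(fractional_impact([10, 25, 50]))  # 0.16 = 0.1 + 0.04 + 0.02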
  2. Marx, W.: Special features of historical papers from the viewpoint of bibliometrics (2011)

    Abstract
    This paper deals with the specific features of historical papers relevant for information retrieval and bibliometrics. The analysis is based mainly on the citation indexes accessible under the Web of Science (WoS) but also on field-specific databases: the Chemical Abstracts Service (CAS) literature database and the INSPEC database. First, the journal coverage of the WoS (in particular of the WoS Century of Science archive), the limitations of specific search fields as well as several database errors are discussed. Then, the problem of misspelled citations and their "mutations" is demonstrated by a few typical examples. Complex author names, complicated journal names, and other sources of errors that result from prior citation practice are further issues. Finally, some basic phenomena limiting the meaning of citation counts of historical papers are presented and explained.
  3. Kuan, C.-H.; Huang, M.-H.; Chen, D.-Z.: A two-dimensional approach to performance evaluation for a large number of research institutions (2012)

    Abstract
    We characterize the research performance of a large number of institutions in a two-dimensional coordinate system based on the shapes of their h-cores so that their relative performance can be conveniently observed and compared. The 2D distribution of these institutions is then utilized (1) to categorize the institutions into a number of qualitative groups revealing the nature of their performance, and (2) to determine the position of a specific institution among the set of institutions. The method is compared with some major h-type indices and tested with empirical data using clinical medicine as an illustrative case. The method is extensible to the research performance evaluation at other aggregation levels such as researchers, journals, departments, and nations.
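    For reference, a small Python sketch of the h-core that the abstract builds on (illustrative only; the paper's two-dimensional coordinates are derived from the shape of this set):

      def h_index(citations):
          ranked = sorted(citations, reverse=True)
          return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

      def h_core(citations):
          """The h most cited papers of a unit, where h is its h-index."""
          return sorted(citations, reverse=True)[:h_index(citations)]

      print(h_core([25, 8, 5, 3, 3, 1]))  # h = 3, so the h-core is [25, 8, 5]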
  4. García, J.A.; Rodriguez-Sánchez, R.; Fdez-Valdivia, J.: Scientific subject categories of Web of Knowledge ranked according to their multidimensional prestige of influential journals (2012)

    Abstract
    A journal may be considered as having dimension-specific prestige when its score, based on a given journal ranking model, exceeds a threshold value. But a journal has multidimensional prestige only if it is a prestigious journal with respect to a number of dimensions, e.g., Institute for Scientific Information Impact Factor, immediacy index, eigenfactor score, and article influence score. The multidimensional prestige of influential journals takes into account the fact that several prestige indicators should be used for a distinct analysis of the impact of scholarly journals in a subject category. After having identified the multidimensionally influential journals, their prestige scores can be aggregated to produce a summary measure of multidimensional prestige for a subject category, which satisfies numerous properties. Using this measure of multidimensional prestige to rank subject categories, we have found the top scientific subject categories of Web of Knowledge as of 2010.
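    A hedged sketch of the idea (the thresholds, the minimum number of dimensions, and the summation rule below are assumptions for illustration, not the paper's exact aggregation):

      import numpy as np

      def multidimensional_prestige(scores, thresholds, min_dims=2):
          """scores: journals x dimensions (e.g., impact factor, immediacy index,
          eigenfactor score, article influence score); thresholds: one cutoff per dimension."""
          prestigious = scores >= thresholds                 # dimension-specific prestige
          influential = prestigious.sum(axis=1) >= min_dims  # prestigious in several dimensions
          # Aggregate above-threshold scores of influential journals into one category score.
          return float((scores * prestigious)[influential].sum())

      scores = np.array([[3.1, 0.8, 0.02, 1.4],   # journal A
                         [0.9, 0.2, 0.01, 0.5]])  # journal B
      print(multidimensional_prestige(scores, np.array([2.0, 0.5, 0.015, 1.0])))  # ~5.32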
  5. Ding, Y.; Yan, E.: Scholarly network similarities : how bibliographic coupling networks, citation networks, cocitation networks, topical networks, coauthorship networks, and coword networks relate to each other (2012)

    Abstract
    This study explores the similarity among six types of scholarly networks aggregated at the institution level, including bibliographic coupling networks, citation networks, cocitation networks, topical networks, coauthorship networks, and coword networks. Cosine distance is chosen to measure the similarities among the six networks. The authors found that topical networks and coauthorship networks have the lowest similarity; cocitation networks and citation networks have high similarity; bibliographic coupling networks and cocitation networks have high similarity; and coword networks and topical networks have high similarity. In addition, through multidimensional scaling, two dimensions can be identified among the six networks: Dimension 1 can be interpreted as citation-based versus noncitation-based, and Dimension 2 can be interpreted as social versus cognitive. The authors recommend the use of hybrid or heterogeneous networks to study research interaction and scholarly communications.
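    The similarity computation itself is compact; a minimal sketch, assuming the networks are given as adjacency (weight) matrices over the same set of institutions:

      import numpy as np

      def network_cosine_similarity(A, B):
          """Cosine similarity between two networks' flattened adjacency matrices."""
          a = np.asarray(A, dtype=float).ravel()
          b = np.asarray(B, dtype=float).ravel()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      citation = np.array([[0, 3], [1, 0]])
      cocitation = np.array([[0, 2], [2, 0]])
      print(network_cosine_similarity(citation, cocitation))  # ~0.89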
  6. Amez, L.: Citation measures at the micro level : influence of publication age, field, and uncitedness (2012)

    Abstract
    The application of micro-level citation indicators is not without controversy. The procedure requires the availability of both adequate data sets and trusted metrics. Few indicators have been developed to deal specifically with individual assessment. The h-type indices are the most popular category; however, the dependence of h-type metrics on publication age and field makes their application often unjustified. This article studies the effects that publication age and field normalization have on h-type citation values of German Leibniz Prize winners. This data set is exclusive in that it is highly scrutinized for homonyms. Results are compared with other field-normalized citation rates, contributing to the debate on using demarcation versus average citation approaches to evaluate top researchers.
  7. Leydesdorff, L.; Rotolo, D.; Rafols, I.: Bibliometric perspectives on medical innovation using the medical subject headings of PubMed (2012)

    Abstract
    Multiple perspectives on the nonlinear processes of medical innovations can be distinguished and combined using the Medical Subject Headings (MeSH) of the MEDLINE database. Focusing on three main branches ("diseases," "drugs and chemicals," and "techniques and equipment"), we use base maps and overlay techniques to investigate the translations and interactions and thus to gain a bibliometric perspective on the dynamics of medical innovations. To this end, we first analyze the MEDLINE database, the MeSH index tree, and the various options for a static mapping from different perspectives and at different levels of aggregation. Following a specific innovation (RNA interference) over time, the notion of a trajectory which leaves a signature in the database is elaborated. Can the detailed index terms describing the dynamics of research be used to predict the diffusion dynamics of research results? Possibilities are specified for further integration between the MEDLINE database on one hand, and the Science Citation Index and Scopus (containing citation information) on the other.
  8. Boyack, K.W.; Small, H.; Klavans, R.: Improving the accuracy of co-citation clustering using full text (2013)

    Abstract
    Historically, co-citation models have been based only on bibliographic information. Full-text analysis offers the opportunity to significantly improve the quality of the signals upon which these co-citation models are based. In this work we study the effect of reference proximity on the accuracy of co-citation clusters. Using a corpus of 270,521 full text documents from 2007, we compare the results of traditional co-citation clustering using only the bibliographic information to results from co-citation clustering where proximity between reference pairs is factored into the pairwise relationships. We find that accounting for reference proximity from full text can increase the textual coherence (a measure of accuracy) of a co-citation cluster solution by up to 30% over the traditional approach based on bibliographic information.
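    A sketch of proximity-weighted co-citation counting under assumed details (the decay function and scale below are illustrative; the paper's exact weighting scheme is not reproduced here):

      from itertools import combinations
      from collections import Counter

      def proximity_cocitation_weights(ref_positions, scale=1000.0):
          """ref_positions: cited reference -> character offsets of its in-text
          citations within one full-text document."""
          weights = Counter()
          for r1, r2 in combinations(sorted(ref_positions), 2):
              gap = min(abs(p - q) for p in ref_positions[r1] for q in ref_positions[r2])
              weights[(r1, r2)] += 1.0 / (1.0 + gap / scale)  # nearby pairs weigh more
          return weights

      print(proximity_cocitation_weights({"Smith2001": [120], "Jones2003": [140, 9800]}))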
  9. Donner, P.: Enhanced self-citation detection by fuzzy author name matching and complementary error estimates (2016)

    Abstract
    In this article I investigate the shortcomings of exact string match-based author self-citation detection methods. The contributions of this study are twofold. First, I apply a fuzzy string matching algorithm for self-citation detection and benchmark this approach and other common methods of exclusively author name-based self-citation detection against a manually curated ground truth sample. Near full recall can be achieved with the proposed method while incurring only negligible precision loss. Second, I report some important observations from the results about the extent of latent self-citations and their characteristics and give an example of the effect of improved self-citation detection on the document level self-citation rate of real data.
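    A minimal sketch of fuzzy name-based self-citation detection; difflib's SequenceMatcher stands in here for whatever string-similarity measure the article actually uses, and the threshold is an assumption:

      from difflib import SequenceMatcher

      def probable_self_citation(citing_authors, cited_authors, threshold=0.9):
          return any(
              SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
              for a in citing_authors for b in cited_authors
          )

      # A misspelled surname that exact string matching would miss:
      print(probable_self_citation(["Leydesdorff, L."], ["Leydesdorf, L."]))  # True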
  10. Leginus, M.; Zhai, C.X.; Dolog, P.: Personalized generation of word clouds from tweets (2016)

    Abstract
    Active users of Twitter are often overwhelmed by the vast number of tweets. In this work we attempt to help users browse a large number of accumulated posts. We propose personalized word cloud generation as a means of user navigation. Various past user activities, such as published tweets, retweets, and tweets seen but not retweeted, are leveraged for enhanced personalization of word clouds. The best personalization results are attained with users' past retweets. However, users' own past tweets are not as useful as retweets for personalization. Negative preferences derived from tweets seen but not retweeted further enhance personalized word cloud generation. The ranking combination method outperforms the preranking approach and provides a general framework for combined ranking of various kinds of past user information for enhanced word cloud generation. To better capture subtle differences between generated word clouds, we propose evaluating word clouds with a mean average precision measure.
  11. Yan, E.: Disciplinary knowledge production and diffusion in science (2016)
  12. Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.: Strategic intelligence on emerging technologies : scientometric overlay mapping (2017)

    Abstract
    This paper examines the use of scientometric overlay mapping as a tool of "strategic intelligence" to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical, social, and cognitive spaces. To do so, we longitudinally analyze (with publication and patent data) three case studies of emerging technologies in the medical domain. These are RNA interference (RNAi), human papillomavirus (HPV) testing technologies for cervical cancer, and thiopurine methyltransferase (TPMT) genetic testing. Given the flexibility (i.e., adaptability to different sources of data) and granularity (i.e., applicability across multiple levels of data aggregation) of overlay mapping techniques, we argue that these techniques can favor the integration and comparison of results from different contexts and cases, thus potentially functioning as a platform for "distributed" strategic intelligence for analysts and decision makers.
  13. Orduna-Malea, E.; Thelwall, M.; Kousha, K.: Web citations in patents : evidence of technological impact? (2017)
  14. An, J.; Kim, N.; Kan, M.-Y.; Kumar Chandrasekaran, M.; Song, M.: Exploring characteristics of highly cited authors according to citation location and content (2017)

    Abstract
    Big Science and cross-disciplinary collaborations have reshaped the intellectual structure of research areas. A number of works have tried to uncover this hidden intellectual structure by analyzing citation contexts. However, none of them has analyzed citation contexts by the logical structure of documents, such as sections. The two major goals of this study are to find characteristics of authors who are highly cited section-wise and to identify the differences in section-wise author networks. This study uses 29,158 research articles culled from the ACL Anthology, which hosts articles on computational linguistics and natural language processing. We find that the distribution of citations across sections is skewed and that a different set of highly cited authors share distinct academic characteristics, according to their citation locations. Furthermore, the author networks based on citation context similarity reveal that the intellectual structure of a domain differs across different sections.
  15. Tay, W.; Zhang, X.; Karimi, S.: Beyond mean rating : probabilistic aggregation of star ratings based on helpfulness (2020)

    Abstract
    The star-rating mechanism of customer reviews is used universally by the online population to compare and select merchants, movies, products, and services. The consensus opinion from aggregation of star ratings is used as a proxy for item quality. Online reviews are noisy and effective aggregation of star ratings to accurately reflect the "true quality" of products and services is challenging. The mean-rating aggregation model is widely used and other aggregation models are also proposed. These existing aggregation models rely on a large number of reviews to tolerate noise. However, many products rarely have reviews. We propose probabilistic aggregation models for review ratings based on the Dirichlet distribution to combat data sparsity in reviews. We further propose to exploit the "helpfulness" social information and time to filter noisy reviews and effectively aggregate ratings to compute the consensus opinion. Our experiments on an Amazon data set show that our probabilistic aggregation models based on "helpfulness" achieve better performance than the statistical and heuristic baseline approaches.
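    A sketch of Dirichlet-based, helpfulness-weighted aggregation in the spirit of the abstract (the symmetric prior and the use of helpfulness as a pseudo-count weight are assumptions for illustration):

      import numpy as np

      def dirichlet_mean_rating(ratings, helpfulness, alpha=1.0):
          """ratings: star values 1-5; helpfulness: per-review weights in [0, 1]."""
          counts = np.full(5, alpha)             # symmetric Dirichlet prior over 5 stars
          for r, h in zip(ratings, helpfulness):
              counts[r - 1] += h                 # helpfulness-weighted pseudo-count
          probs = counts / counts.sum()          # posterior mean of star probabilities
          return float(probs @ np.arange(1, 6))  # expected star rating

      # Two helpful 5-star reviews outweigh one unhelpful 1-star review:
      print(dirichlet_mean_rating([5, 5, 1], [0.9, 0.8, 0.1]))  # ~3.47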
  16. Kostoff, R.N.; Rio, J.A. del; Humenik, J.A.; Garcia, E.O.; Ramirez, A.M.: Citation mining : integrating text mining and bibliometrics for research user profiling (2001)

    Abstract
    Identifying the users and impact of research is important for research performers, managers, evaluators, and sponsors. It is important to know whether the audience reached is the audience desired. It is useful to understand the technical characteristics of the other research/development/applications impacted by the originating research, and to understand other characteristics (names, organizations, countries) of the users impacted by the research. Because of the many indirect pathways through which fundamental research can impact applications, identifying the user audience and the research impacts can be very complex and time consuming. The purpose of this article is to describe a novel approach for identifying the pathways through which research can impact other research, technology development, and applications, and to identify the technical and infrastructure characteristics of the user population. A novel literature-based approach was developed to identify the user community and its characteristics. The research performed is characterized by one or more articles accessed by the Science Citation Index (SCI) database, because the SCI's citation-based structure makes citation studies easy to perform. The user community is characterized by the articles in the SCI that cite the original research articles, and that cite the succeeding generations of these articles as well. Text mining is performed on the citing articles to identify the technical areas impacted by the research, the relationships among these technical areas, and relationships among the technical areas and the infrastructure (authors, journals, organizations). A key component of text mining, concept clustering, was used to provide both a taxonomy of the citing articles' technical themes and further technical insights based on theme relationships arising from the grouping process. Bibliometrics is performed on the citing articles to profile the user characteristics. Citation mining, this integration of citation bibliometrics and text mining, is applied to the 307 first-generation citing articles of a fundamental physics article on the dynamics of vibrating sand-piles. Most of the 307 citing articles were basic research whose main themes were aligned with those of the cited article. However, about 20% of the citing articles were research or development in other disciplines, or development within the same discipline. The text mining alone identified the intradiscipline applications and extradiscipline impacts and applications; this was confirmed by detailed reading of the 307 abstracts. The combination of citation bibliometrics and text mining provides a synergy unavailable with each approach taken independently. Furthermore, text mining is a requirement for feasible, comprehensive research-impact determination. The integrated multigeneration citation analysis required for broad research impact determination of highly cited articles will produce thousands, tens of thousands, or even hundreds of thousands of citing-article abstracts.
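    The concept-clustering step can be approximated with standard tooling; a sketch using TF-IDF plus k-means as a stand-in (the article's actual text-mining pipeline is not specified here):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.cluster import KMeans

      def cluster_citing_abstracts(abstracts, n_clusters=10):
          """Group citing-article abstracts into technical themes."""
          X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(abstracts)
          return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)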
  17. Bornmann, L.; Daniel, H.-D.: Selecting manuscripts for a high-impact journal through peer review : a citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere (2008)

    Abstract
    All journals that use peer review have to deal with the following question: Does the peer review system fulfill its declared objective to select the best scientific work? We investigated the journal peer-review process at Angewandte Chemie International Edition (AC-IE), one of the prime chemistry journals worldwide, and conducted a citation analysis for Communications that were accepted by the journal (n = 878) or rejected but published elsewhere (n = 959). The results of negative binomial regression models show that holding all other model variables constant, being accepted by AC-IE increases the expected number of citations by up to 50%. A comparison of average citation counts (with 95% confidence intervals) of accepted and rejected (but published elsewhere) Communications with international scientific reference standards was undertaken. As reference standards, (a) mean citation counts for the journal set provided by Thomson Reuters corresponding to the field chemistry and (b) specific reference standards that refer to the subject areas of Chemical Abstracts were used. When compared to reference standards, the mean impact on chemical research is for the most part far above average not only for accepted Communications but also for rejected (but published elsewhere) Communications. However, average and below-average scientific impact is to be expected significantly less frequently for accepted Communications than for rejected Communications. All in all, the results of this study confirm that peer review at AC-IE is able to select the best scientific work with the highest impact on chemical research.
    Content
    See also the erratum: Re: Selecting manuscripts for a high-impact journal through peer review: A citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere. In: Journal of the American Society for Information Science and Technology 59(2008) no.12, pp.2037-2038.
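    The negative binomial regression reported in the abstract can be sketched on simulated data (the variables, effect size, and dispersion below are invented for illustration; statsmodels' NegativeBinomial is one possible implementation, not necessarily the authors'):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      accepted = rng.integers(0, 2, size=500)             # 1 = accepted, 0 = rejected/elsewhere
      mu = np.exp(1.0 + 0.4 * accepted)                   # assumed 'true' citation rates
      citations = rng.negative_binomial(2, 2 / (2 + mu))  # overdispersed citation counts

      fit = sm.NegativeBinomial(citations, sm.add_constant(accepted)).fit(disp=0)
      print(np.exp(fit.params[1]))  # citation multiplier for acceptance, ~exp(0.4) = 1.5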
  18. Huber, J.C.: A new method for analyzing scientific productivity (2001)

    Abstract
    Previously, a new method for measuring scientific productivity was demonstrated for authors in mathematical logic and some subareas of 19th-century physics. The purpose of this article is to apply this new method to other fields to support its general applicability. We show that the method yields the same results for modern physicists, biologists, psychologists, inventors, and composers. That is, each individual's production is constant over time, and the time-period fluctuations follow the Poisson distribution. However, the productivity (e.g., papers per year) varies widely across individuals. We show that the distribution of productivity does not follow the normal (i.e., bell curve) distribution, but rather follows the exponential distribution. Thus, most authors produce at the lowest rate and very few authors produce at the higher rates. We also show that the career duration of individuals follows the exponential distribution. Thus, most authors have a very short career and very few have a long career. The principal advantage of the new method is that the detailed structure of author productivity, such as trends, can be examined. Another advantage is that information science studies gain guidance on the length of the time interval being examined and on estimating when an author's entire body of work has been recorded.
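    A small simulation consistent with the claims above (the rate and career-length parameters are assumed for illustration): constant individual Poisson rates, exponentially distributed across authors, yield the skewed lifetime productivity the article reports.

      import numpy as np

      rng = np.random.default_rng(42)
      rates = rng.exponential(scale=1.5, size=10_000)    # papers/year, varies across authors
      careers = rng.exponential(scale=8.0, size=10_000)  # career length in years
      papers = rng.poisson(rates * careers)              # lifetime output per author

      print(papers.mean(), np.median(papers))  # mean well above median: a skewed distribution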
  19. Thelwall, M.: Conceptualizing documentation on the Web : an evaluation of different heuristic-based models for counting links between university Web sites (2002)

    Abstract
    All known previous Web link studies have used the Web page as the primary indivisible source document for counting purposes. Arguments are presented to explain why this is not necessarily optimal and why other alternatives have the potential to produce better results. This is despite the fact that individual Web files are often the only choice if search engines are used for raw data and are the easiest basic Web unit to identify. The central issue is of defining the Web "document": that which should comprise the single indissoluble unit of coherent material. Three alternative heuristics are defined for the educational arena based upon the directory, the domain and the whole university site. These are then compared by implementing them on a set of 108 UK university institutional Web sites under the assumption that a more effective heuristic will tend to produce results that correlate more highly with institutional research productivity. It was discovered that the domain and directory models were able to successfully reduce the impact of anomalous linking behavior between pairs of Web sites, with the latter being the method of choice. Reasons are then given as to why a document model on its own cannot eliminate all anomalies in Web linking behavior. Finally, the results from all models give a clear confirmation of the very strong association between the research productivity of a UK university and the number of incoming links from its peers' Web sites.
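    A sketch of the three counting heuristics under assumed URL handling (the paper's actual normalization rules are more involved):

      from urllib.parse import urlparse

      def document_key(url, model="directory"):
          p = urlparse(url)
          if model == "domain":
              return p.netloc
          if model == "directory":
              return p.netloc + p.path.rsplit("/", 1)[0]  # strip the file name
          return p.netloc + p.path                        # page-level model

      links = [("http://a.ac.uk/x/1.html", "http://b.ac.uk/y/2.html"),
               ("http://a.ac.uk/x/3.html", "http://b.ac.uk/y/4.html")]
      pairs = {(document_key(s), document_key(t)) for s, t in links}
      print(len(pairs))  # 1 under the directory model; the page model would count 2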
  20. Yoshikane, F.; Kageura, K.; Tsuji, K.: A method for the comparative analysis of concentration of author productivity, giving consideration to the effect of sample size dependency of statistical measures (2003)

    Abstract
    Studies of the concentration of author productivity based upon counts of papers by individual authors will produce measures that change systematically with sample size. Yoshikane, Kageura, and Tsuji seek a statistical framework which will avoid this scale effect problem. Using the number of authors in a field as an absolute concentration measure, and Gini's index as a relative concentration measure, they describe four literatures from both viewpoints with measures insensitive to one another. Both measures will increase with sample size. They then plot profiles of the two measures on the basis of a Monte Carlo simulation of 1000 trials for 20 equally spaced intervals and compare the characteristics of the literatures. Using data from conferences hosted by four academic societies between 1992 and 1997, they find a coefficient of loss exceeding 0.15, indicating that the measures depend strongly on sample size. The simulation shows that a larger sample size leads to lower absolute concentration and higher relative concentration. Comparisons made at the same sample size present quite different results than the original data and allow direct comparison of population characteristics.
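    For reference, a compact computation of the Gini index used here as the relative concentration measure (a standard formula, sketched in Python):

      import numpy as np

      def gini(counts):
          """Gini index of a productivity distribution (0 = equal; near 1 = concentrated)."""
          x = np.sort(np.asarray(counts, dtype=float))
          n = x.size
          ranks = np.arange(1, n + 1)
          return float(2 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n)

      print(gini([1, 1, 1, 10]))  # ~0.52: output concentrated in one prolific author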

Types

  • a 1091
  • el 15
  • m 15
  • s 9
  • b 2
  • r 1
  • x 1