Search (54 results, page 1 of 3)

  • × theme_ss:"Informetrie"
  • × type_ss:"a"
  • × year_i:[2000 TO 2010}
  1. Egghe, L.: Type/Token-Taken informetrics (2003) 0.03
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects produced by sources (e.g., journals producing articles, authors producing papers). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of a word in a text). In informetrics, types that occur often, for example in a database, will also be requested often, for example in information retrieval. The relative use of these occurrences will be higher than the relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* ≥ μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
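    The claim μ* ≥ μ can be seen from general size-biased-average reasoning (a sketch, not the article's own derivation; the notation f(n) for the number of sources with exactly n items, with T sources and A items in total, is assumed here):

      \mu = \frac{\sum_{n \ge 1} n\, f(n)}{\sum_{n \ge 1} f(n)} = \frac{A}{T},
      \qquad
      \mu^{*} = \frac{\sum_{n \ge 1} n^{2}\, f(n)}{\sum_{n \ge 1} n\, f(n)}.

    By the Cauchy-Schwarz inequality, (\sum n f(n))^2 \le (\sum f(n))(\sum n^2 f(n)), hence \mu^* \ge \mu; under Lotka's law f(n) = C/n^\alpha both sums can be evaluated explicitly, which is what makes \mu^* computable from \mu and \alpha.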
  2. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.03
    
    Abstract
    The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n; here j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically from data; the continuous Lotka function is the one needed when applying Lotkaian informetrics, i.e., when determining properties that can be derived from the (continuous) model. It is hence important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know when applying theoretical results from the continuous model to exponents derived from practical data.
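    Side by side, the two functions take the form shown below (a display sketch; the normalization constants C and K, the prime on the continuous exponent, and the domain j ≥ 1 are notational assumptions, not the paper's exact statement):

      f(n) = \frac{C}{n^{\alpha}}, \quad n = 1, 2, 3, \ldots
      \qquad \text{versus} \qquad
      \varphi(j) = \frac{K}{j^{\alpha'}}, \quad j \ge 1,

    where f(n) counts sources with exactly n items and \varphi(j) is the density of sources with item density j; the paper's result is that \alpha and \alpha' are approximately equal within the exponent ranges met in practice.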
  3. Meho, L.I.; Sugimoto, C.R.: Assessing the scholarly impact of information studies : a tale of two citation databases - Scopus and Web of Science (2009) 0.03
    
    Abstract
    This study uses citations, from 1996 to 2007, to the work of 80 randomly selected full-time information studies (IS) faculty members from North America to examine differences between Scopus and Web of Science in assessing the scholarly impact of the field, focusing on the most frequently citing journals, conference proceedings, research domains, and institutions, as well as all citing countries. Results show that when assessment is limited to smaller citing entities (e.g., journals, conference proceedings, institutions), the two databases produce considerably different results, whereas when assessment is limited to larger citing entities (e.g., research domains, countries), the two databases produce very similar pictures of scholarly impact. In the former case, using Scopus (for journals and institutions) and both Scopus and Web of Science (for conference proceedings) is necessary to assess or visualize the scholarly impact of IS more accurately, whereas in the latter case, assessing or visualizing the scholarly impact of IS is independent of the database used.
  4. Rousseau, R.; Ye, F.Y.: A proposal for a dynamic h-type index (2008) 0.03
    
    Abstract
    A time-dependent h-type indicator is proposed. This indicator depends on the size of the h-core, the number of citations received, and recent change in the value of the h-index. As such, it tries to combine in a dynamic way older information about the source (e.g., a scientist or research institute that is evaluated) with recent information.
  5. Marchant, T.: Score-based bibliometric rankings of authors (2009) 0.03
    
    Abstract
    Scoring rules (also called score-based or summation-based rankings) form a family of bibliometric rankings of authors in which each author is ranked by the sum, over all of his or her publications, of some partial score per publication. Many such rankings are widely used (e.g., the number of publications, weighted or not by the impact factor, by the number of authors, or by the number of citations). We present an axiomatic analysis of the family of all scoring rules and of some particular cases within this family.
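    A scoring rule in this sense is just a per-publication partial score summed per author. A minimal sketch in Python (the particular partial score used below, citations divided by the number of authors, is one of the abstract's illustrative examples, not the paper's definition):

      from collections import defaultdict

      def rank_authors(publications, partial_score):
          """Rank authors by the sum of partial scores of their publications."""
          totals = defaultdict(float)
          for pub in publications:
              for author in pub["authors"]:
                  totals[author] += partial_score(pub)
          return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

      # Hypothetical data; scoring rule: citations weighted by 1/#authors.
      pubs = [
          {"authors": ["A", "B"], "citations": 10},
          {"authors": ["A"], "citations": 3},
      ]
      print(rank_authors(pubs, lambda p: p["citations"] / len(p["authors"])))
      # [('A', 8.0), ('B', 5.0)]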
  6. Nicolaisen, J.: Citation analysis (2007) 0.02
    
    Date
    13. 7.2008 19:53:22
  7. Egghe, L.: Untangling Herdan's law and Heaps' law : mathematical and informetric arguments (2007) 0.02
    
    Abstract
    Herdan's law in linguistics and Heaps' law in information retrieval are different formulations of the same phenomenon. Stated briefly and in linguistic terms, they say that vocabulary size is a concave increasing power law of text size. This study investigates these laws from a purely mathematical and informetric point of view. A general informetric argument shows that the problem of proving these laws is, in fact, ill-posed. Using the more general terminology of sources and items, the author shows, by presenting exact formulas from Lotkaian informetrics, that the total number T of sources is not only a function of the total number A of items but also of several parameters (e.g., the parameters occurring in Lotka's law). Consequently, it is shown that a fixed T (or A) value can lead to different possible A (respectively, T) values. Limiting the T(A)-variability to increasing samples (e.g., in a text, as done in linguistics), the author then shows, in a purely mathematical way, that for large sample sizes T ≈ A^φ, where φ is a constant, φ < 1 but close to 1; hence, roughly, Heaps' or Herdan's law can be proved without using any linguistic or informetric argument. The author also shows that for smaller samples, φ is not a constant but essentially decreases, as confirmed by practical examples. Finally, an exact informetric argument on random sampling in the items shows that, in most cases, T = T(A) is a concavely increasing function, in accordance with practical examples.
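    In display form, the large-sample result stated here reads (the proportionality constant K is an assumption; the abstract gives only the power relation):

      T \approx K A^{\varphi}, \qquad 0 < \varphi < 1, \ \varphi \text{ close to } 1,

    with T the number of sources (vocabulary size in linguistics) and A the number of items (text length), which is exactly the Heaps/Herdan form of a concave increasing power law.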
  8. Bornmann, L.; Mutz, R.; Daniel, H.D.: Do we need the h index and its variants in addition to standard bibliometric measures? (2009) 0.02
    
    Abstract
    In this study, we investigate whether there is a need for the h index and its variants in addition to standard bibliometric measures (SBMs). Results from our recent study (L. Bornmann, R. Mutz, & H.-D. Daniel, 2008) indicated that there are two types of indices: one type (e.g., the h index) describes the most productive core of a scientist's output and indicates the number of papers in that core, while the other type (e.g., the a index) depicts the impact of the papers in the core. In evaluative bibliometric studies, the two dimensions quantity and impact of output are usually assessed using the SBMs number of publications (for the quantity dimension) and total citation counts (for the impact dimension). We additionally included the SBMs in the factor analysis. The results of the newly calculated analysis indicate a high intercorrelation between number of publications and the indices that load substantially on the factor Quantity of the Productive Core, as well as between total citation counts and the indices that load substantially on the factor Impact of the Productive Core. The high-loading indices and SBMs within one performance dimension can be called redundant in empirical application, as high intercorrelations between different indicators are a sign that they measure something similar (or the same). Based on our findings, we propose using any pair of indicators (one relating to the number of papers in a researcher's productive core and one relating to the impact of these core papers) as a meaningful approach for comparing scientists.
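    For reference, the h index named here is computable from a citation list alone; a minimal Python sketch (the a index shown as the mean citation count of the h-core papers follows Jin's definition, stated here as an assumption):

      def h_index(citations):
          """h = largest k such that k papers have at least k citations each."""
          cites = sorted(citations, reverse=True)
          return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

      def a_index(citations):
          """Mean citation count of the h-core papers (assumed: Jin's a index)."""
          cites = sorted(citations, reverse=True)
          h = h_index(cites)
          return sum(cites[:h]) / h if h else 0.0

      print(h_index([10, 8, 5, 4, 3]))  # 4
      print(a_index([10, 8, 5, 4, 3]))  # 6.75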
  9. Garfield, E.; Paris, S.W.; Stock, W.G.: HistCite(TM) : a software tool for informetric analysis of citation linkage (2006) 0.02
    
    Abstract
    HistCite(TM) is a software tool for analyzing and visualizing direct citation linkages between scientific papers. Its inputs are bibliographic records (with cited references) from "Web of Knowledge" or other sources. Its outputs are various tables and graphs with informetric indicators about the knowledge domain under study. As an example, we informetrically analyze the literature about Alexius Meinong, an Austrian philosopher and psychologist. The article briefly discusses the informetric functionality of "Web of Knowledge" and broadly shows the possibilities that HistCite offers to its users (e.g., scientists, scientometricians, and science journalists).
  10. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    
    Date
    22. 7.2006 15:22:28
  11. Frandsen, T.F.; Rousseau, R.: Article impact calculated over arbitrary periods (2005) 0.02
    
    Abstract
    In this paper we address the various formulations of the impact of articles, usually groups of articles, as gauged by the citations that these articles receive over a certain period of time. The journal impact factor, as published by ISI (Philadelphia, PA), is the best-known formulation of the impact of journals (considered as sets of articles), but many others have been defined in the literature. Impact factors have varying publication and citation periods, and the chosen lengths of these periods enable, e.g., a distinction between synchronous and diachronous impact factors. It is shown how an impact factor for the general case can be defined. Two alternatives for a general impact factor are proposed, depending on whether different publication years are seen as a whole, hence treating each one of them differently, or whether one operates with citation periods of identical length but allows each publication period a different starting point.
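    For orientation, the best-known special case mentioned here, the two-year synchronous ISI journal impact factor for year y, can be written as follows (the notation C_y(P) for citations received in year y by the paper set P is assumed for display purposes):

      \mathrm{IF}(y) = \frac{C_{y}(P_{y-1}) + C_{y}(P_{y-2})}{|P_{y-1}| + |P_{y-2}|},

    i.e., a one-year citation period over the two preceding publication years; the general impact factor of the paper lets both periods vary, which yields the synchronous/diachronous distinction.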
  12. Leydesdorff, L.; Bihui, J.: Mapping the Chinese Science Citation Database in terms of aggregated journal-journal citation relations (2005) 0.02
    
    Abstract
    Methods developed for mapping the journal structure contained in aggregated journal-journal citations in the Science Citation Index (SCI; Thomson ISI, 2002) are applied to the Chinese Science Citation Database of the Chinese Academy of Sciences. This database covered 991 journals in 2001, of which only 37 originally had English titles and only 31 were covered by the SCI. Using factor-analytical and graph-analytical techniques, the authors show that the journal relations are dually structured. The main structure is the intellectual organization of the journals in journal groups (as in the international SCI), but the university-based journals provide an institutional layer that orients this structure towards practical ends (e.g., agriculture). This mechanism of integration is further distinguished from the role of general science journals. The Chinese Science Citation Database thus exhibits the characteristics of "Mode 2" or transdisciplinary science in the production of scientific knowledge more than its Western counterpart does. The contexts of application lead to correlation among the components.
  13. Vinkler, P.: The institutionalization of scientific information : a scientometric model (ISI-S Model) (2002) 0.02
    
    Abstract
    A scientometric model (the ISI-S model) is introduced for describing the institutionalization process of scientific information. The central concept of ISI-S is that published scientific information may develop over time, through permanent evaluation and modification processes, toward a cognitive consensus among distinguished authors of the respective scientific field or discipline. ISI-S describes the information and knowledge systems of science as a global network of interdependent information and knowledge clusters whose content and size change dynamically. ISI-S assumes sets of information with short- or long-term impact and information integrated into basic scientific knowledge or common knowledge. The type of the information source (e.g., lecture, journal paper, review, monograph, book, textbook, lexicon) and the length of the impact are related to the grade of institutionalization. References are considered proofs of manifested impact. The relative and absolute development of scientific knowledge seems to be slower than the increase in the number of publications.
  14. Sidiropoulos, A.; Manolopoulos, Y.: A new perspective to automatically rank scientific conferences using digital libraries (2005) 0.02
    
    Abstract
    Citation analysis is performed in order to evaluate authors and scientific collections, such as journals and conference proceedings. Currently, two major systems perform citation analysis: the Science Citation Index (SCI) by the Institute for Scientific Information (ISI) and CiteSeer by the NEC Research Institute. The SCI, mostly a manual system until recently, is based on the notion of the ISI Impact Factor, which has been used extensively for citation analysis purposes. The CiteSeer system, on the other hand, is an automatically built digital library using agent technology, also based on the notion of the ISI Impact Factor. In this paper, we investigate new alternative notions besides the ISI Impact Factor, in order to provide a novel approach to ranking scientific collections. Furthermore, we present a Web-based system built by extracting data from the Databases and Logic Programming (DBLP) website of the University of Trier. Using the new citation metrics, our system emerges as a useful tool for ranking scientific collections. In this respect, some first remarks are presented, e.g., on ranking conferences related to databases.
  15. White, H.D.; Wellman, B.; Nazer, N.: Does Citation Reflect Social Structure? : Longitudinal Evidence From the "Globenet" Interdisciplinary Research Group (2004) 0.02
    
    Abstract
    Many authors have posited a social component in citation, the consensus being that citers and citees often have interpersonal as well as intellectual ties. Evidence for this belief has been rather meager, however, in part because social networks researchers have lacked bibliometric data (e.g., pairwise citation counts from online databases) and citation analysts have lacked sociometric data (e.g., pairwise measures of acquaintanceship). In 1997 Nazer extensively measured personal relationships and communication behaviors in what we call "Globenet," an international group of 16 researchers from seven disciplines that was established in 1993 to study human development. Since Globenet's membership is known, it was possible during 2002 to obtain citation records for all members in databases of the Institute for Scientific Information. This permitted examination of how members cited each other (intercited) in journal articles over the past three decades and in a 1999 book to which they all contributed. It was also possible to explore links between the intercitation data and the social and communication data. Using network-analytic techniques, we look at the growth of intercitation over time, the extent to which it follows disciplinary or interdisciplinary lines, whether it covaries with degrees of acquaintanceship, whether it reflects Globenet's organizational structure, whether it is associated with particular in-group communication patterns, and whether it is related to the cocitation of Globenet members. Results show cocitation to be a powerful predictor of intercitation in the journal articles, while being an editor or co-author is an important predictor in the book. Intellectual ties based on shared content did better as predictors than content-neutral social ties like friendship. However, interciters in Globenet communicated more than noninterciters did.
  16. Lewison, G.: The work of the Bibliometrics Research Group (City University) and associates (2005) 0.02
    
    Date
    20. 1.2007 17:02:22
  17. Huber, J.C.: A new method for analyzing scientific productivity (2001) 0.02
    
    Abstract
    Previously, a new method for measuring scientific productivity was demonstrated for authors in mathematical logic and some subareas of 19th-century physics. The purpose of this article is to apply this new method to other fields to support its general applicability. We show that the method yields the same results for modern physicists, biologists, psychologists, inventors, and composers. That is, each individual's production is constant over time, and the time-period fluctuations follow the Poisson distribution. However, productivity (e.g., papers per year) varies widely across individuals. We show that the distribution of productivity does not follow the normal (i.e., bell-curve) distribution but rather the exponential distribution: most authors produce at the lowest rate and very few produce at the higher rates. We also show that the career durations of individuals follow the exponential distribution: most authors have a very short career and very few have a long one. The principal advantage of the new method is that the detailed structure of author productivity, such as trends, can be examined. Another advantage is that information science studies gain guidance on the length of the time interval to examine and on estimating when an author's entire body of work has been recorded.
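    The distributional claims lend themselves to a small simulation; a sketch in Python under assumed illustrative parameters (rates exponential across authors, Poisson fluctuations per time period, exponential career lengths):

      import numpy as np

      rng = np.random.default_rng(42)
      n_authors = 10_000

      # Generative sketch of the abstract's claims: each author's rate is
      # constant; rates and career lengths vary exponentially across authors;
      # yearly output fluctuates as Poisson around the author's rate.
      rates = rng.exponential(scale=1.5, size=n_authors)   # papers per year
      careers = np.ceil(rng.exponential(scale=8.0, size=n_authors)).astype(int)

      totals = np.array([rng.poisson(lam=r, size=c).sum()
                         for r, c in zip(rates, careers)])
      print(f"median lifetime output: {np.median(totals):.0f}")
      print(f"share of authors below 1 paper/year: {(rates < 1).mean():.2f}")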
  18. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.02
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science, such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data-cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data-cleansing techniques and multiple data analysis methods.
  19. White, H.D.; Boell, S.K.; Yu, H.; Davis, M.; Wilson, C.S.; Cole, F.T.H.: Libcitations : a measure for comparative assessment of book publications in the humanities and social sciences (2009) 0.02
    
    Abstract
    Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the libcitation count. This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world (e.g., the reputations of authors and the prestige of publishers). From libcitation counts, measures can be derived for comparing research units. Here, we imagine a match-up between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000 to 2006. We present each book's raw libcitation count, its rank within its Library of Congress (LC) class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
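    The LC-class normalized libcitation score is described as patterned on the field normalized citation score; on that pattern it would take the form below (a sketch of the analogy, not the paper's printed formula; L(b) for a book's libcitation count and LC(b) for its class are assumed notation):

      \mathrm{NLS}(b) = \frac{L(b)}{\overline{L}_{\mathrm{LC}(b)}},

    i.e., the book's raw libcitation count divided by the mean count over the books in its Library of Congress class.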
  20. Jeong, S.; Lee, S.; Kim, H.-G.: Are you an invited speaker? : a bibliometric analysis of elite groups for scholarly events in bioinformatics (2009) 0.02
    
    Abstract
    Participating in scholarly events (e.g., conferences and workshops) as an elite-group member, such as an organizing committee chair or member, program committee chair or member, session chair, invited speaker, or award winner, is beneficial to a researcher's career development. The objective of this study is to investigate whether elite-group membership for scholarly events is representative of scholars' prominence, and which elite group is the most prestigious. We collected data about 15 global (excluding regional) bioinformatics scholarly events held in 2007. We sampled (via stratified random sampling) participants from the elite groups of each event. Then, bibliometric indicators (total citations and h index) of seven elite groups and a non-elite group, consisting of authors who submitted at least one paper to an event but were not included in any elite group, were observed using the Scopus Citation Tracker. The Kruskal-Wallis test was performed to examine the differences among the eight groups, followed by multiple comparison tests (Dwass, Steel, Critchlow-Fligner) as follow-up procedures. The experimental results reveal that scholars in an elite group perform better on the bibliometric indicators than others do. Among the elite groups, the invited speaker group performs significantly best, while the other elite-group types are not statistically distinguishable from one another. From this analysis, we confirm that elite-group membership in scholarly events, at least in the field of bioinformatics, can be used as an alternative marker of a scholar's prominence, with invited speaker being the most important prominence indicator among the elite groups.
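    The group comparison described here is a standard nonparametric workflow; a minimal Python sketch with made-up citation samples (the group values are illustrative assumptions, not data from the study):

      from scipy.stats import kruskal

      # Hypothetical total-citation samples for three of the eight groups
      # compared in the study.
      invited_speakers = [120, 340, 95, 480, 210]
      pc_members = [60, 45, 150, 80, 30]
      non_elite = [5, 12, 0, 40, 8]

      stat, p = kruskal(invited_speakers, pc_members, non_elite)
      print(f"H = {stat:.2f}, p = {p:.4f}")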