Search (100 results, page 2 of 5)

  • × theme_ss:"Informetrie"
  • × year_i:[2000 TO 2010}
  1. Katsaros, D.; Akritidis, L.; Bozanis, P.: ¬The f index : quantifying the impact of coterminal citations on scientists' ranking (2009) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 2805) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2805,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2805, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2805)
      0.16666667 = coord(1/6)
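The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explain output. Its arithmetic can be reproduced directly from the values shown; the following sketch (function and parameter names are my own) recomputes the 0.0121149 score of result 1:

```python
import math

def classic_similarity_score(freq, doc_freq, max_docs, field_norm,
                             query_norm, coord):
    """Recompute a Lucene ClassicSimilarity score from its explain values."""
    tf = math.sqrt(freq)                           # 1.4142135 for freq=2.0
    idf = math.log(max_docs / (doc_freq + 1)) + 1  # 5.4090285 for docFreq=537
    query_weight = idf * query_norm                # 0.20271951
    field_weight = tf * idf * field_norm           # 0.35857132
    return coord * query_weight * field_weight     # coord(1/6) scales the sum

# Values taken from result 1 ("ranking" in doc 2805):
score = classic_similarity_score(freq=2.0, doc_freq=537, max_docs=44218,
                                 field_norm=0.046875, query_norm=0.03747799,
                                 coord=1.0 / 6.0)
print(round(score, 7))  # ~0.0121149
```

The same function reproduces the other hits: only `freq` (and hence `tf`) and `fieldNorm` vary between documents, which is why so many results share identical scores.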
    
  2. Leydesdorff, L.: How are new citation-based journal indicators adding to the bibliometric toolbox? (2009) 0.01
    0.0121149 = product of:
      0.0726894 = sum of:
        0.0726894 = weight(_text_:ranking in 2929) [ClassicSimilarity], result of:
          0.0726894 = score(doc=2929,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.35857132 = fieldWeight in 2929, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.046875 = fieldNorm(doc=2929)
      0.16666667 = coord(1/6)
    
    Abstract
    The launch of Scopus and Google Scholar, and methodological developments in social-network analysis, have made many more indicators for evaluating journals available than the traditional impact factor, cited half-life, and immediacy index of the ISI. In this study, these new indicators are compared with one another and with the older ones. Do the various indicators measure new dimensions of the citation networks, or are they highly correlated among themselves? Are they robust and relatively stable over time? Two main dimensions are distinguished - size and impact - which together shape influence. The h-index combines the two dimensions and can also be considered as an indicator of reach (like Indegree). PageRank is mainly an indicator of size, but has important interactions with centrality measures. The Scimago Journal Ranking (SJR) indicator provides an alternative to the journal impact factor, but its computation is less straightforward.
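The h-index mentioned in the abstract has a simple operational definition: the largest h such that h publications have been cited at least h times each. A minimal sketch over hypothetical citation counts:

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for position, cites in enumerate(ranked, start=1):
        if cites >= position:
            h = position   # this paper still clears the threshold
        else:
            break          # counts are sorted, so no later paper can
    return h

# Hypothetical record: five papers cited 10, 8, 5, 4 and 3 times
print(h_index([10, 8, 5, 4, 3]))  # 4
```

This shows how the index couples the size dimension (number of papers) with the impact dimension (citations per paper) in a single number, as the abstract notes.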
  3. Meho, L.I.; Spurgin, K.M.: Ranking the research productivity of library and information science faculty and schools : an evaluation of data sources and research methods (2005) 0.01
    0.011422038 = product of:
      0.06853223 = sum of:
        0.06853223 = weight(_text_:ranking in 4343) [ClassicSimilarity], result of:
          0.06853223 = score(doc=4343,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.33806428 = fieldWeight in 4343, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=4343)
      0.16666667 = coord(1/6)
    
    Abstract
    This study evaluates the data sources and research methods used in earlier studies to rank the research productivity of Library and Information Science (LIS) faculty and schools. In doing so, the study identifies both tools and methods that generate more accurate publication count rankings as well as databases that should be taken into consideration when conducting comprehensive searches in the literature for research and curricular needs. With a list of 2,625 items published between 1982 and 2002 by 68 faculty members of 18 American Library Association- (ALA-) accredited LIS schools, hundreds of databases were searched. Results show that there are only 10 databases that provide significant coverage of the LIS indexed literature. Results also show that restricting the data sources to one, two, or even three databases leads to inaccurate rankings and erroneous conclusions. Because no database provides comprehensive coverage of the LIS literature, researchers must rely on a wide range of disciplinary and multidisciplinary databases for ranking and other research purposes. The study answers such questions as the following: Is the Association of Library and Information Science Education's (ALISE's) directory of members a reliable tool to identify a complete list of faculty members at LIS schools? How many and which databases are needed in a multifile search to arrive at accurate publication count rankings? What coverage will be achieved using a certain number of databases? Which research areas are well covered by which databases? What alternative methods and tools are available to supplement gaps among databases? Did coverage performance of databases change over time? What counting method should be used when determining what and how many items each LIS faculty member and school has published? The authors recommend advanced analysis of research productivity to provide a more detailed assessment of research productivity of authors and programs.
  4. Bonitz, M.: Ranking of nations and heightened competition in Matthew core journals : two faces of the Matthew effect for countries (2002) 0.01
    0.011422038 = product of:
      0.06853223 = sum of:
        0.06853223 = weight(_text_:ranking in 818) [ClassicSimilarity], result of:
          0.06853223 = score(doc=818,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.33806428 = fieldWeight in 818, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=818)
      0.16666667 = coord(1/6)
    
    Abstract
    The Matthew effect for countries (MEC) consists of the systematic deviation in the number of actual (observed) citations from the number of expected citations: A few countries, expecting a high impact (i.e., a high number of cites per paper) receive a surplus of citations, while the majority of countries, expecting a lower impact, lose citations. The MEC is characterized by numerous facets, but two are the most impressive. The first is the possibility of ranking the science nations by their overall efficiency of scientific performance, thus making the MEC attractive for science policy. The second is the concentration of the MEC in a small number of scientific journals which happen to be the most competitive markets for scientific papers and, therefore, are of interest to librarians as well as scientists. First, by using an appropriate measure for the above-mentioned deviation of the observed from the expected citation rate one can bring the countries under investigation into a rank order, which is almost stable over time and independent of the main scientific fields and the size (i.e., publication output) of the participating countries. Metaphorically speaking, this country rank distribution shows the extent to which a country is using its scientific talents. This is the first facet of the MEC. The second facet appears when one studies the mechanism (i.e., microstructure) of the MEC. Every journal contributes to the MEC. The "atoms" of the MEC are redistributed citations, whose number turns out to be a new and sensitive indicator for any scientific journal. Bringing the journals into a rank order according to this indicator, one finds that only 144 journals out of 2,712 contain half of all redistributed citations, and thus account for half of the MEC. We give a list of these "Matthew core journals" (MCJ) together with a new typology relating the new indicator to the well-known ones, such as publication or citation numbers. It is our hypothesis that the MCJ are forums of the fiercest competition in science--the "Olympic games in science" proceed in this highest class of scientific journals.
  5. Haridasan, S.; Kulshrestha, V.K.: Citation analysis of scholarly communication in the journal Knowledge Organization (2007) 0.01
    0.011422038 = product of:
      0.06853223 = sum of:
        0.06853223 = weight(_text_:ranking in 863) [ClassicSimilarity], result of:
          0.06853223 = score(doc=863,freq=4.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.33806428 = fieldWeight in 863, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03125 = fieldNorm(doc=863)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - Citation analysis is one of the popular methods employed for identifying core documents and the complex relationships between citing and cited documents for a particular scholarly community in a geographical proximity. The present citation study aims to understand the information needs, use patterns and use behaviour of library and information science researchers particularly engaged in the field of knowledge organization. Design/methodology/approach - The data relating to all the references appended to the articles during the period under study were collected and tabulated. Findings - Citation analysis of the journal for the period under study reveals that the average number of citations is around 21 per article. The major source of information is books and documents published during the latter half of the century (1982-91). Authors from the USA, UK and Germany are the major contributors to the journal. India is ranked seventh in terms of contributions. Research limitations/implications - The study undertaken is limited to nine years, i.e. 1993-2001. The model citation index of the journal is analyzed using the first seven core authors. Practical implications - Ranking of periodicals helps to identify the core periodicals cited in the journal Knowledge Organization. Ranking of authors is done to know the eminent personalities in the subject, whose work is used by the authors to refine their ideas on the subject or topic. Originality/value - A model citation index for the first seven most cited authors was worked out, and it reveals the historical relationship of cited and citing documents. This model citation index can be used to identify the most cited authors as researchers currently working on special problems, to determine whether a paper has been cited, whether there has been a review of a subject, whether a concept has been applied, a theory confirmed or a method improved.
  6. Maier, S.: Wie Wissenschaftler berühmt werden : Anzahl der Veröffentlichungen zählt - "Google" ist das Maß aller Dinge (2004) 0.01
    0.011031911 = product of:
      0.066191465 = sum of:
        0.066191465 = weight(_text_:suchmaschine in 2901) [ClassicSimilarity], result of:
          0.066191465 = score(doc=2901,freq=2.0), product of:
            0.21191008 = queryWeight, product of:
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.03747799 = queryNorm
            0.31235638 = fieldWeight in 2901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6542544 = idf(docFreq=420, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2901)
      0.16666667 = coord(1/6)
    
    Content
    "A group of physicists at Clarkson University in the United States has investigated how scientists become famous. According to the study, a researcher's fame, expressed as the number of hits returned by a search in the Internet search engine Google, is proportional to the researcher's number of publications. The study is available on the arXiv server (http://arxiv.org/abs/cond-mat/0404515). Daniel ben-Avraham and his colleagues first determined the fame of 449 researchers in the field of solid-state physics using the Internet search service Google. They defined a researcher's fame as the number of web pages that mention his or her name. This number was then compared with the number of the researcher's publications on the Los Alamos server since 1991. It turned out that fame grows linearly with the number of a researcher's publications. For film stars and athletes, by contrast, fame grows exponentially with the number of successful films or medals, according to ben-Avraham. These differing regularities may be connected with the fact that a researcher's success is generally noticed only by colleagues in the same field, not by a wider audience. The authors of the study hope that their results will be of interest to social scientists and psychologists - and that in this way physicists, too, can become famous outside their own field."
  7. Rousseau, R.; Zuccala, A.: ¬A classification of author co-citations : definitions and search strategies (2004) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 2266) [ClassicSimilarity], result of:
          0.0605745 = score(doc=2266,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 2266, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2266)
      0.16666667 = coord(1/6)
    
    Abstract
    The term author co-citation is defined and classified according to four distinct forms: the pure first-author co-citation, the pure author co-citation, the general author co-citation, and the special co-author/co-citation. Each form can be used to obtain one count in an author co-citation study, based on a binary counting rule, which either recognizes the co-citedness of two authors in a given reference list (1) or does not (0). Most studies using author co-citations have relied solely on first-author co-citation counts as evidence of an author's oeuvre or body of work contributed to a research field. In this article, we argue that an author's contribution to a selected field of study should not be limited, but should be based on his/her complete list of publications, regardless of author ranking. We discuss the implications associated with using each co-citation form and show where simple first-author co-citations fit within our classification scheme. Examples are given to substantiate each author co-citation form defined in our classification, including a set of sample Dialog(TM) searches using references extracted from the SciSearch database.
  8. Jepsen, E.T.; Seiden, P.; Ingwersen, P.; Björneborn, L.; Borlund, P.: Characteristics of scientific Web publications : preliminary data gathering and analysis (2004) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 3091) [ClassicSimilarity], result of:
          0.0605745 = score(doc=3091,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 3091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3091)
      0.16666667 = coord(1/6)
    
    Abstract
    Because of the increasing presence of scientific publications on the Web, combined with the existing difficulties in easily verifying and retrieving these publications, research on techniques and methods for retrieval of scientific Web publications is called for. In this article, we report on the initial steps taken toward the construction of a test collection of scientific Web publications within the subject domain of plant biology. The steps reported are those of data gathering and data analysis aiming at identifying characteristics of scientific Web publications. The data used in this article were generated based on specifically selected domain topics that are searched for in three publicly accessible search engines (Google, AllTheWeb, and AltaVista). A sample of the retrieved hits was analyzed with regard to how various publication attributes correlated with the scientific quality of the content and whether this information could be employed to harvest, filter, and rank Web publications. The attributes analyzed were inlinks, outlinks, bibliographic references, file format, language, search engine overlap, structural position (according to site structure), and the occurrence of various types of metadata. As could be expected, the ranked output differs between the three search engines. Apparently, this is caused by differences in ranking algorithms rather than the databases themselves. In fact, because scientific Web content in this subject domain receives few inlinks, both AltaVista and AllTheWeb retrieved a higher degree of accessible scientific content than Google. Because of the search engine cutoffs of accessible URLs, the feasibility of using search engine output for Web content analysis is also discussed.
  9. Rousseau, R.: Journal evaluation : technical and practical issues (2002) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 816) [ClassicSimilarity], result of:
          0.0605745 = score(doc=816,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 816, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=816)
      0.16666667 = coord(1/6)
    
    Abstract
    This essay provides an overview of journal evaluation indicators. It highlights the strengths and weaknesses of different indicators, together with their range of applicability. The definition of a "quality journal," different notions of impact factors, the meaning of ranking journals, and possible biases in citation databases are also discussed. Attention is given to using the journal impact in evaluation studies. The quality of a journal is a multifaceted notion. Journals can be evaluated for different purposes, and hence the results of such evaluation exercises can be quite different depending on the indicator(s) used. The impact factor, in one of its versions, is probably the most used indicator when it comes to gauging the visibility of a journal on the research front. Generalized impact factors, over periods longer than the traditional two years, are better indicators for the long-term value of a journal. As with all evaluation studies, care must be exercised when considering journal impact factors as a quality indicator. It seems best to use a whole battery of indicators (including several impact factors) and to change this group of indicators depending on the purpose of the evaluation study. Nowadays it goes without saying that special attention is paid to e-journals and specific indicators for this type of journal.
  10. Meho, L.I.; Yang, K.: Impact of data sources on citation counts and rankings of LIS faculty : Web of Science versus Scopus and Google Scholar (2007) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 620) [ClassicSimilarity], result of:
          0.0605745 = score(doc=620,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 620, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=620)
      0.16666667 = coord(1/6)
    
    Abstract
    The Institute for Scientific Information's (ISI, now Thomson Scientific, Philadelphia, PA) citation databases have been used for decades as a starting point and often as the only tools for locating citations and/or conducting citation analyses. The ISI databases (or Web of Science [WoS]), however, may no longer be sufficient because new databases and tools that allow citation searching are now available. Using citations to the work of 25 library and information science (LIS) faculty members as a case study, the authors examine the effects of using Scopus and Google Scholar (GS) on the citation counts and rankings of scholars as measured by WoS. Overall, more than 10,000 citing and purportedly citing documents were examined. Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals. The use of Scopus and GS, in addition to WoS, helps reveal a more accurate and comprehensive picture of the scholarly impact of authors. The WoS data took about 100 hours of collecting and processing time, Scopus consumed 200 hours, and GS a grueling 3,000 hours.
  11. Li, J.; Willett, P.: ArticleRank : a PageRank-based alternative to numbers of citations for analysing citation networks (2009) 0.01
    0.010095751 = product of:
      0.0605745 = sum of:
        0.0605745 = weight(_text_:ranking in 751) [ClassicSimilarity], result of:
          0.0605745 = score(doc=751,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.29880944 = fieldWeight in 751, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0390625 = fieldNorm(doc=751)
      0.16666667 = coord(1/6)
    
    Abstract
    Purpose - The purpose of this paper is to suggest an alternative to the widely used Times Cited criterion for analysing citation networks. The approach involves taking account of the natures of the papers that cite a given paper, so as to differentiate between papers that attract the same number of citations. Design/methodology/approach - ArticleRank is an algorithm that has been derived from Google's PageRank algorithm to measure the influence of journal articles. ArticleRank is applied to two datasets - a citation network based on an early paper on webometrics, and a self-citation network based on the 19 most cited papers in the Journal of Documentation - using citation data taken from the Web of Knowledge database. Findings - ArticleRank values provide a different ranking of a set of papers from that provided by the corresponding Times Cited values, and overcomes the inability of the latter to differentiate between papers with the same numbers of citations. The difference in rankings between Times Cited and ArticleRank is greatest for the most heavily cited articles in a dataset. Originality/value - This is a novel application of the PageRank algorithm.
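ArticleRank itself is not spelled out in the abstract, but the PageRank scheme it adapts can be sketched with plain power iteration over a small citation graph: each paper's weight depends on the weights of the papers that cite it, so two papers with the same citation count can end up with different ranks. The graph and all names below are hypothetical; this is generic PageRank, not the authors' exact algorithm:

```python
def pagerank(out_links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a citation graph.

    out_links maps each paper to the list of papers it cites;
    rank flows from citing papers to cited ones.
    """
    nodes = set(out_links) | {c for cited in out_links.values() for c in cited}
    n = len(nodes)
    rank = {p: 1.0 / n for p in nodes}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in nodes}
        # papers citing nothing ("dangling") spread their rank uniformly
        dangling = sum(rank[p] for p in nodes if not out_links.get(p))
        for p in nodes:
            new[p] += damping * dangling / n
        for src, cited in out_links.items():
            for c in cited:
                new[c] += damping * rank[src] / len(cited)
        rank = new
    return rank

# X and Y each receive exactly one citation, but X is cited by the
# heavily cited paper Z, while Y is cited by the uncited paper W:
graph = {"P1": ["Z"], "P2": ["Z"], "P3": ["Z"], "Z": ["X"], "W": ["Y"]}
ranks = pagerank(graph)
print(ranks["X"] > ranks["Y"])  # True: same Times Cited, different rank
```

This is exactly the differentiation the abstract describes: Times Cited treats X and Y identically, whereas a PageRank-style measure rewards X for being cited by an influential paper.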
  12. Burrell, Q.L.: Predicting future citation behavior (2003) 0.01
    0.007934572 = product of:
      0.047607433 = sum of:
        0.047607433 = product of:
          0.07141115 = sum of:
            0.035866898 = weight(_text_:29 in 3837) [ClassicSimilarity], result of:
              0.035866898 = score(doc=3837,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.27205724 = fieldWeight in 3837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3837)
            0.03554425 = weight(_text_:22 in 3837) [ClassicSimilarity], result of:
              0.03554425 = score(doc=3837,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.2708308 = fieldWeight in 3837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3837)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2003 19:22:48
  13. Haycock, L.A.: Citation analysis of education dissertations for collection development (2004) 0.01
    0.0068010613 = product of:
      0.040806368 = sum of:
        0.040806368 = product of:
          0.061209552 = sum of:
            0.030743055 = weight(_text_:29 in 135) [ClassicSimilarity], result of:
              0.030743055 = score(doc=135,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23319192 = fieldWeight in 135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=135)
            0.030466499 = weight(_text_:22 in 135) [ClassicSimilarity], result of:
              0.030466499 = score(doc=135,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.23214069 = fieldWeight in 135, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=135)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    10. 9.2000 17:38:22
    17.12.2006 19:44:29
  14. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.01
    0.00605745 = product of:
      0.0363447 = sum of:
        0.0363447 = weight(_text_:ranking in 2417) [ClassicSimilarity], result of:
          0.0363447 = score(doc=2417,freq=2.0), product of:
            0.20271951 = queryWeight, product of:
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.03747799 = queryNorm
            0.17928566 = fieldWeight in 2417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4090285 = idf(docFreq=537, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2417)
      0.16666667 = coord(1/6)
    
    Abstract
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
  15. Nicolaisen, J.: Citation analysis (2007) 0.00
    0.0045135557 = product of:
      0.027081333 = sum of:
        0.027081333 = product of:
          0.081244 = sum of:
            0.081244 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.081244 = score(doc=6091,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    13. 7.2008 19:53:22
  16. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.00
    0.0039894576 = product of:
      0.023936745 = sum of:
        0.023936745 = product of:
          0.07181023 = sum of:
            0.07181023 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.07181023 = score(doc=3925,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2006 15:22:28
  17. Lewison, G.: ¬The work of the Bibliometrics Research Group (City University) and associates (2005) 0.00
    0.0033851666 = product of:
      0.020311 = sum of:
        0.020311 = product of:
          0.060932998 = sum of:
            0.060932998 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.060932998 = score(doc=4890,freq=2.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    20. 1.2007 17:02:22
  18. Raan, A.F.J. van: Statistical properties of bibliometric indicators : research group indicator distributions and correlations (2006) 0.00
    0.0023936746 = product of:
      0.014362047 = sum of:
        0.014362047 = product of:
          0.04308614 = sum of:
            0.04308614 = weight(_text_:22 in 5275) [ClassicSimilarity], result of:
              0.04308614 = score(doc=5275,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.32829654 = fieldWeight in 5275, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5275)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 7.2006 16:20:22
  19. Larivière, V.; Gingras, Y.; Archambault, E.: ¬The decline in the concentration of citations, 1900-2007 (2009) 0.00
    0.0023936746 = product of:
      0.014362047 = sum of:
        0.014362047 = product of:
          0.04308614 = sum of:
            0.04308614 = weight(_text_:22 in 2763) [ClassicSimilarity], result of:
              0.04308614 = score(doc=2763,freq=4.0), product of:
                0.13124153 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03747799 = queryNorm
                0.32829654 = fieldWeight in 2763, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2763)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    22. 3.2009 19:22:35
  20. Garfield, E.; Pudovkin, A.I.; Istomin, V.S.: Why do we need algorithmic historiography? (2003) 0.00
    0.0022772634 = product of:
      0.013663581 = sum of:
        0.013663581 = product of:
          0.04099074 = sum of:
            0.04099074 = weight(_text_:29 in 1606) [ClassicSimilarity], result of:
              0.04099074 = score(doc=1606,freq=2.0), product of:
                0.13183585 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03747799 = queryNorm
                0.31092256 = fieldWeight in 1606, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1606)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 3.2003 19:52:23

Languages

  • e 89
  • d 11

Types

  • a 96
  • r 3
  • el 2
  • x 1