Search (196 results, page 1 of 10)

  • language_ss:"e"
  • theme_ss:"Informetrie"
  • year_i:[2010 TO 2020}
  1. Bensman, S.J.; Smolinsky, L.J.: Lotka's inverse square law of scientific productivity : its methods and statistics (2017) 0.01
    0.012852487 = product of:
      0.12852487 = sum of:
        0.12852487 = weight(_text_:log in 3698) [ClassicSimilarity], result of:
          0.12852487 = score(doc=3698,freq=4.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.7009429 = fieldWeight in 3698, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3698)
      0.1 = coord(1/10)
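    The indented breakdown above is Lucene's "explain" output for its ClassicSimilarity (TF-IDF) scoring. A minimal sketch that reproduces the numbers for the term "log" in document 3698, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1))) and taking queryNorm, fieldNorm, and coord as given from the output:
    ```python
    import math

    # Values copied from the explain tree above (term "log", doc 3698).
    freq, doc_freq, max_docs = 4.0, 197, 44218
    query_norm, field_norm = 0.028611459, 0.0546875
    coord = 1 / 10                                   # 1 of 10 query clauses matched

    tf = math.sqrt(freq)                             # 2.0
    idf = 1 + math.log(max_docs / (doc_freq + 1))    # ~6.4086
    query_weight = idf * query_norm                  # ~0.18336
    field_weight = tf * idf * field_norm             # ~0.70094
    print(coord * query_weight * field_weight)       # ~0.012852, the score shown for this entry
    ```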
    
    Abstract
    This brief communication analyzes, from the standpoint of modern theory, the statistics and methods Lotka used to derive his inverse square law of scientific productivity. It finds that he violated the norms of this theory by severely truncating his data on the right. It also proves that Lotka himself, by basing the derivation of his law on this very method, played an important role in establishing the commonly used practice of identifying power-law behavior by the R² fit to a regression line on a log-log plot, a practice that modern theory considers unreliable.
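    As an illustration of the practice discussed in this abstract, a minimal sketch (on synthetic, Lotka-like data, not the paper's) of identifying a power law by least-squares regression on a log-log plot and reading off R²; the paper's point is precisely that a high R² obtained this way is not reliable evidence of power-law behavior:
    ```python
    import math, random

    random.seed(1)

    # Synthetic Lotka-like data: roughly 1000/n^2 authors with n publications, plus noise.
    ns = list(range(1, 21))
    authors = [max(1, round(1000 / n**2 * random.uniform(0.7, 1.3))) for n in ns]

    # Least-squares fit of log(authors) = a + b*log(n): the log-log regression method.
    xs = [math.log(n) for n in ns]
    ys = [math.log(a) for a in authors]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    print(f"estimated exponent {-b:.2f}, R^2 = {1 - ss_res / ss_tot:.3f}")
    ```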
  2. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.01
    0.008203137 = product of:
      0.041015685 = sum of:
        0.029231945 = weight(_text_:kommunikation in 3460) [ClassicSimilarity], result of:
          0.029231945 = score(doc=3460,freq=2.0), product of:
            0.14706601 = queryWeight, product of:
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.028611459 = queryNorm
            0.19876751 = fieldWeight in 3460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.140109 = idf(docFreq=703, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.011783739 = weight(_text_:web in 3460) [ClassicSimilarity], result of:
          0.011783739 = score(doc=3460,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.12619963 = fieldWeight in 3460, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
      0.2 = coord(2/10)
    
    Content
    Cf. also Open Password, no. 167, 1 March 2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters, and two great-grandchildren. Garfield earned his first degree, a bachelor's in chemistry, at Columbia University in New York City in 1949. In 1954 he added a degree in library science, and in 1961 he went on to earn a doctorate in structural linguistics. By his own account he was neither a particularly good nor a particularly happy chemistry student. His "awakening" came at a meeting of the American Chemical Society, when he discovered that one might be able to make a living searching the literature: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield initially worked as a consultant for pharmaceutical companies, where he specialized in scientific information by working through the contents of the relevant journals. In 1955 he put forward his groundbreaking idea in "Science": to record the citations of scientific publications systematically and to make the connections between citations visible. In 1960 Garfield founded the Institute for Scientific Information (ISI), whose CEO he remained until 1992. In 1964 he launched the Science Citation Index; further instruments such as the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978), and the Journal Citation Reports followed. These indexes were brought together in the "Web of Science" and made electronically accessible as a database, putting the relevant literature "at their fingertips" for researchers and helping them find their way around it. Beyond that, the rankings derived from Garfield's measures made it possible to gauge the relative scientific importance of papers, authors, research institutions, regions, and countries.
    In connection with his measures, Garfield spoke out against "bibliographic negligence" and "citation amnesia". In 2002 he wrote: "There will never be a perfect solution to the problem of acknowledging intellectual debts. But a beginning can be made if journal editors will demand a signed pledge from authors that they have searched Medline, Science Citation Index, or other appropriate print and electronic databases." He also warned, however, against improper use of his measures and against exaggerated expectations of them in connection with career decisions about scientists and survival decisions about research institutions. In 1982 the Thomson Corporation acquired ISI for 210 million dollars; its present-day successor organization, Clarivate Analytics, employs more than 4,000 people in over a hundred countries. Garfield also founded a newspaper for scientists, especially life scientists, "The Scientist", which still exists and can be obtained as a free push service. In his contributions to science policy he criticized, for example, President Reagan's science advisers in 1986 as "advocates of the administration's science policies, rather than as objective conduits for communication between the president and the science community." To the article in which he argued for continued funding of UNESCO research programmes he gave the title "Let's Stand Up for Global Science". That remains a fitting title in the Trump era, as the US government dismisses a concept of truth grounded in science as meaningless and focuses on nationalism and isolation instead of on international communication, cooperation, and the joint pursuit of shared interests."
  3. Milojevic, S.: Modes of collaboration in modern science : beyond power laws and preferential attachment (2010) 0.01
    0.007789783 = product of:
      0.07789783 = sum of:
        0.07789783 = weight(_text_:log in 3592) [ClassicSimilarity], result of:
          0.07789783 = score(doc=3592,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 3592, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=3592)
      0.1 = coord(1/10)
    
    Abstract
    The goal of the study was to determine the underlying processes leading to the observed collaborator distribution in modern scientific fields, with special attention to non-power-law behavior. Nanoscience is used as a case study of a modern interdisciplinary field, and its coauthorship network for the 2000-2004 period is constructed from the NanoBank database. We find three collaboration modes that correspond to three distinct ranges in the distribution of collaborators: (1) for authors with fewer than 20 collaborators (the majority) preferential attachment does not hold and they form a log-normal hook instead of a power law; (2) authors with more than 20 collaborators benefit from preferential attachment and form a power-law tail; and (3) authors with between 250 and 800 collaborators are more frequent than expected because of the hyperauthorship practices in certain subfields.
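    For readers unfamiliar with the mechanism tested above, a minimal sketch of preferential attachment, the growth rule under which new authors link to existing authors with probability proportional to their current number of collaborators; this is a generic illustration, not the NanoBank analysis:
    ```python
    import random

    random.seed(0)

    def preferential_attachment(n_nodes, m=2):
        """Grow a network in which each new node links to m existing nodes
        chosen with probability proportional to their current degree."""
        degree = {0: 1, 1: 1}   # start from a single edge between nodes 0 and 1
        targets = [0, 1]        # endpoints listed once per edge end: sampling is degree-proportional
        for new in range(2, n_nodes):
            chosen = set()
            while len(chosen) < m:
                chosen.add(random.choice(targets))
            degree[new] = 0
            for node in chosen:
                degree[node] += 1
                degree[new] += 1
                targets += [node, new]
        return degree

    deg = preferential_attachment(10_000)
    print("largest number of collaborators:", max(deg.values()))
    ```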
  4. Kurtz, M.J.; Henneken, E.A.: Measuring metrics : a 40-year longitudinal cross-validation of citations, downloads, and peer review in astrophysics (2017) 0.01
    0.007789783 = product of:
      0.07789783 = sum of:
        0.07789783 = weight(_text_:log in 3430) [ClassicSimilarity], result of:
          0.07789783 = score(doc=3430,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.42483553 = fieldWeight in 3430, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.046875 = fieldNorm(doc=3430)
      0.1 = coord(1/10)
    
    Abstract
    Citation measures, and newer altmetric measures such as downloads, are now commonly used to inform personnel decisions. How well do or can these measures measure or predict the past, current, or future scholarly performance of an individual? Using data from the Smithsonian/NASA Astrophysics Data System we analyze the publication, citation, download, and distinction histories of a cohort of 922 individuals who received a U.S. PhD in astronomy in the period 1972-1976. By examining the same and different measures at the same and different times for the same individuals we are able to show the capabilities and limitations of each measure. Because the distributions are lognormal, measurement uncertainties are multiplicative; we show that in order to state with 95% confidence that one person's citations and downloads are significantly higher than another person's, the log difference in the ratio of counts must be at least 0.3 dex, which corresponds to a multiplicative factor of 2.
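    A quick check of the arithmetic in the last sentence: a 0.3 dex difference is a difference of 0.3 in the base-10 logarithm, i.e. a multiplicative factor of 10^0.3 ≈ 2.
    ```python
    import math

    print(10 ** 0.3)        # ~1.995, i.e. roughly a factor of 2
    print(math.log10(2))    # ~0.301 dex
    ```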
  5. Crespo, J.A.; Herranz, N.; Li, Y.; Ruiz-Castillo, J.: ¬The effect on citation inequality of differences in citation practices at the web of science subject category level (2014) 0.01
    0.0076588183 = product of:
      0.03829409 = sum of:
        0.029157192 = weight(_text_:web in 1291) [ClassicSimilarity], result of:
          0.029157192 = score(doc=1291,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 1291, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1291)
        0.009136898 = product of:
          0.027410695 = sum of:
            0.027410695 = weight(_text_:22 in 1291) [ClassicSimilarity], result of:
              0.027410695 = score(doc=1291,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27358043 = fieldWeight in 1291, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1291)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This article studies the impact of differences in citation practices at the subfield, or Web of Science subject category, level, using the model introduced in Crespo, Li, and Ruiz-Castillo (2013a), according to which the number of citations received by an article depends on its underlying scientific influence and the field to which it belongs. We use the same Thomson Reuters data set of about 4.4 million articles used in Crespo et al. (2013a) to analyze 22 broad fields. The main results are the following: First, when the classification system goes from 22 fields to 219 subfields, the effect on citation inequality of differences in citation practices increases from ~14% at the field level to 18% at the subfield level. Second, we estimate a set of exchange rates (ERs) over a wide [660, 978] citation quantile interval to express the citation counts of articles as the equivalent counts in the all-sciences case. In the fractional case, for example, we find that in 187 of 219 subfields the ERs are reliable in the sense that the coefficient of variation is smaller than or equal to 0.10. Third, in the fractional case the normalization of the raw data using the ERs (or subfield mean citations) as normalization factors reduces the importance of the differences in citation practices from 18% to 3.8% (3.4%) of overall citation inequality. Fourth, the results in the fractional case are essentially replicated when we adopt a multiplicative approach.
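    A minimal sketch of the simplest normalization mentioned above, dividing an article's citations by its subfield's mean citations so that counts from differently citing subfields become comparable; the subfields and counts are invented, and this is not the paper's exchange-rate estimation itself:
    ```python
    from collections import defaultdict

    # Hypothetical (article_id, subfield, citations) records.
    records = [
        ("a1", "Information Science", 12),
        ("a2", "Information Science", 3),
        ("a3", "Cell Biology", 80),
        ("a4", "Cell Biology", 40),
    ]

    totals, counts = defaultdict(int), defaultdict(int)
    for _, subfield, cites in records:
        totals[subfield] += cites
        counts[subfield] += 1
    mean_cites = {s: totals[s] / counts[s] for s in totals}

    # Normalized score: raw citations divided by the subfield mean.
    for art, subfield, cites in records:
        print(art, round(cites / mean_cites[subfield], 2))
    ```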
    Object
    Web of Science
  6. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.01
    0.0076588183 = product of:
      0.03829409 = sum of:
        0.029157192 = weight(_text_:web in 2590) [ClassicSimilarity], result of:
          0.029157192 = score(doc=2590,freq=6.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3122631 = fieldWeight in 2590, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2590)
        0.009136898 = product of:
          0.027410695 = sum of:
            0.027410695 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
              0.027410695 = score(doc=2590,freq=4.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.27358043 = fieldWeight in 2590, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2590)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific outputs of ontology research, an important emerging discipline that has huge potential to improve information understanding, organization, and management. Design/methodology/approach - This study collected literature published during 1900-2012 from the Web of Science database. The bibliometric analysis was performed from authorial, institutional, national, spatiotemporal, and topical aspects. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - Characteristics of publication outputs suggested that ontology research has entered into the soaring stage, along with increased participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors came mostly from North America, Europe, and East Asia. The USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in the Semantic Web, applications in bioinformatics, philosophy theories, and common supporting technology. Semantic Web research played a core role, and gene ontology study was well-developed. The study focus of ontology has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which might provide a potential guide for future research. The new index provides an alternative way to evaluate the multidisciplinary influence of researchers.
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  7. Stuart, D.: Web metrics for library and information professionals (2014) 0.01
    0.007545275 = product of:
      0.07545275 = sum of:
        0.07545275 = weight(_text_:web in 2274) [ClassicSimilarity], result of:
          0.07545275 = score(doc=2274,freq=82.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.808072 = fieldWeight in 2274, product of:
              9.055386 = tf(freq=82.0), with freq of:
                82.0 = termFreq=82.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2274)
      0.1 = coord(1/10)
    
    Abstract
    This is a practical guide to using web metrics to measure impact and demonstrate value. The web provides an opportunity to collect a host of different metrics, from those associated with social media accounts and websites to more traditional research outputs. This book is a clear guide for library and information professionals as to what web metrics are available and how to assess and use them to make informed decisions and demonstrate value. As individuals and organizations increasingly use the web in addition to traditional publishing avenues and formats, this book provides the tools to unlock web metrics and evaluate the impact of this content. The key topics covered include: bibliometrics, webometrics and web metrics; data collection tools; evaluating impact on the web; evaluating social media impact; investigating relationships between actors; exploring traditional publications in a new environment; web metrics and the web of data; the future of web metrics and the library and information professional. The book will provide a practical introduction to web metrics for a wide range of library and information professionals, from the bibliometrician wanting to demonstrate the wider impact of a researcher's work than can be demonstrated through traditional citations databases, to the reference librarian wanting to measure how successfully they are engaging with their users on Twitter. It will be a valuable tool for anyone who wants to not only understand the impact of content, but demonstrate this impact to others within the organization and beyond.
    Content
    1. Introduction. Metrics -- Indicators -- Web metrics and Ranganathan's laws of library science -- Web metrics for the library and information professional -- The aim of this book -- The structure of the rest of this book -- 2. Bibliometrics, webometrics and web metrics. Web metrics -- Information science metrics -- Web analytics -- Relational and evaluative metrics -- Evaluative web metrics -- Relational web metrics -- Validating the results -- 3. Data collection tools. The anatomy of a URL, web links and the structure of the web -- Search engines 1.0 -- Web crawlers -- Search engines 2.0 -- Post search engine 2.0: fragmentation -- 4. Evaluating impact on the web. Websites -- Blogs -- Wikis -- Internal metrics -- External metrics -- A systematic approach to content analysis -- 5. Evaluating social media impact. Aspects of social network sites -- Typology of social network sites -- Research and tools for specific sites and services -- Other social network sites -- URL shorteners: web analytic links on any site -- General social media impact -- Sentiment analysis -- 6. Investigating relationships between actors. Social network analysis methods -- Sources for relational network analysis -- 7. Exploring traditional publications in a new environment. More bibliographic items -- Full text analysis -- Greater context -- 8. Web metrics and the web of data. The web of data -- Building the semantic web -- Implications of the web of data for web metrics -- Investigating the web of data today -- SPARQL -- Sindice -- LDSpider: an RDF web crawler -- 9. The future of web metrics and the library and information professional. How far we have come -- The future of web metrics -- The future of the library and information professional and web metrics.
    RSWK
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
    Subject
    Bibliothek / World Wide Web / World Wide Web 2.0 / Analyse / Statistik
    Bibliometrie / Semantic Web / Soziale Software
  8. Mingers, J.; Macri, F.; Petrovici, D.: Using the h-index to measure the quality of journals in the field of business and management (2012) 0.01
    0.007278278 = product of:
      0.03639139 = sum of:
        0.028568096 = weight(_text_:web in 2741) [ClassicSimilarity], result of:
          0.028568096 = score(doc=2741,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.3059541 = fieldWeight in 2741, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2741)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2741) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2741,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2741, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2741)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper considers the use of the h-index as a measure of a journal's research quality and contribution. We study a sample of 455 journals in business and management, all of which are included in the ISI Web of Science (WoS) and the Association of Business Schools' peer-review journal ranking list. The h-index is compared with both the traditional impact factors and with the peer review judgements. We also consider two sources of citation data - the WoS itself and Google Scholar. The conclusions are that the h-index is preferable to the impact factor for a variety of reasons, especially the selective coverage of the impact factor and the fact that it disadvantages journals that publish many papers. Google Scholar is also preferred to WoS as a data source. However, the paper notes that it is not sufficient to use any single metric to properly evaluate research achievements.
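    For reference, a minimal sketch of the h-index calculation applied to a journal's citation counts (the counts below are invented): the h-index is the largest h such that h of the journal's papers have at least h citations each.
    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        while h < len(cites) and cites[h] >= h + 1:
            h += 1
        return h

    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3
    ```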
    Date
    29. 1.2016 19:00:16
    Object
    Web of Science
  9. Yang, S.; Han, R.; Ding, J.; Song, Y.: ¬The distribution of Web citations (2012) 0.01
    0.0069977264 = product of:
      0.06997726 = sum of:
        0.06997726 = weight(_text_:web in 2735) [ClassicSimilarity], result of:
          0.06997726 = score(doc=2735,freq=24.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.7494315 = fieldWeight in 2735, product of:
              4.8989797 = tf(freq=24.0), with freq of:
                24.0 = termFreq=24.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2735)
      0.1 = coord(1/10)
    
    Abstract
    A substantial amount of research has focused on the persistence or availability of Web citations. The present study analyzes Web citation distributions. Web citations are defined as the mentions of the URLs of Web pages (Web resources) as references in academic papers. The present paper primarily focuses on the analysis of the URLs of Web citations and uses three sets of data, namely, Set 1 from the Humanities and Social Science Index in China (CSSCI, 1998-2009), Set 2 from the publications of two international computer science societies, Communications of the ACM and IEEE Computer (1995-1999), and Set 3 from the medical science database, MEDLINE, of the National Library of Medicine (1994-2006). Web citation distributions are investigated based on Web site types, Web page types, URL frequencies, URL depths, URL lengths, and year of article publication. Results show significant differences in the Web citation distributions among the three data sets. However, when the URLs of Web citations with the same hostnames are aggregated, the distributions in the three data sets are consistent with the power law (the Lotka function).
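    A minimal sketch of the aggregation step mentioned in the last sentence: grouping Web citation URLs by hostname before examining the frequency distribution; the URLs are invented examples, not data from the three sets.
    ```python
    from collections import Counter
    from urllib.parse import urlparse

    # Invented example URLs standing in for Web citations extracted from references.
    cited_urls = [
        "http://www.w3.org/TR/rdf-schema/",
        "http://www.w3.org/TR/owl-features/",
        "http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.1",
        "http://www.nlm.nih.gov/pubs/factsheets/medline.html",
    ]

    hosts = Counter(urlparse(u).hostname for u in cited_urls)
    for host, freq in hosts.most_common():
        print(host, freq)
    ```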
  10. Ding, Y.: Applying weighted PageRank to author citation networks (2011) 0.01
    0.0065225097 = product of:
      0.032612547 = sum of:
        0.023567477 = weight(_text_:web in 4188) [ClassicSimilarity], result of:
          0.023567477 = score(doc=4188,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25239927 = fieldWeight in 4188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4188)
        0.009045068 = product of:
          0.027135205 = sum of:
            0.027135205 = weight(_text_:22 in 4188) [ClassicSimilarity], result of:
              0.027135205 = score(doc=4188,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.2708308 = fieldWeight in 4188, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4188)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956-2008 were collected from the Web of Science. Weighted PageRank with citation and publication as weighted vectors was calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures.
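    A minimal sketch of PageRank with a weighted teleportation vector on a tiny author citation graph; the graph, weights, damping factor, and exact formulation are illustrative assumptions rather than the article's implementation:
    ```python
    def weighted_pagerank(links, weights, d=0.85, iters=100):
        """links: {author: [authors they cite]}; weights: e.g. publication counts,
        used in place of the uniform teleportation term."""
        nodes = list(weights)
        total_w = sum(weights.values())
        pr = {n: weights[n] / total_w for n in nodes}
        for _ in range(iters):
            pr = {
                n: (1 - d) * weights[n] / total_w
                   + d * sum(pr[m] / len(links[m]) for m in nodes if n in links.get(m, []))
                for n in nodes
            }
        return pr

    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}   # who cites whom
    pubs = {"A": 10, "B": 3, "C": 7}                    # weight vector (e.g. publications)
    print(weighted_pagerank(links, pubs))
    ```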
    Date
    22. 1.2011 13:02:21
  11. Vinkler, P.: Application of the distribution of citations among publications in scientometric evaluations (2011) 0.01
    0.006491486 = product of:
      0.06491486 = sum of:
        0.06491486 = weight(_text_:log in 4769) [ClassicSimilarity], result of:
          0.06491486 = score(doc=4769,freq=2.0), product of:
            0.18335998 = queryWeight, product of:
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.028611459 = queryNorm
            0.3540296 = fieldWeight in 4769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.4086204 = idf(docFreq=197, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4769)
      0.1 = coord(1/10)
    
    Abstract
    The π-indicator (or πv-indicator) of a set of journal papers is equal to a hundredth of the total number of citations obtained by the elite set of publications. The number of publications in the elite set P(π) is calculated as the square root of total papers. For greater sets the following equation is used: P(πv) = (10 log P) - 10, where P is the total number of publications. For sets comprising a single or several extremely frequently cited papers, the π-index may be distorted. Therefore, a new indicator based on the distribution of citations is suggested. Accordingly, the publications are classified into citation categories, of which lower limits are given as 0, and (2n + 1), whereas the upper limits as 2n (n = 0, 2, 3, etc.). The citations distribution score (CDS) index is defined as the sum of weighted numbers of publications in the individual categories. The CDS-index increases logarithmically with the increasing number of citations. The citation distribution rate indicator is introduced by relating the actual CDS-index to the possible maximum. Several size-dependent and size-independent indicators were calculated. It has been concluded that relevant, already accepted scientometric indicators may validate novel indices through resulting in similar conclusions ("converging validation of indicators").
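    A worked sketch of the π-index as defined above, on invented citation counts: the elite set holds √P publications (for larger sets, P(πv) = 10·log P - 10, with log taken here as base 10, an assumption), and the index is one hundredth of the citations received by that elite set.
    ```python
    import math

    # Invented citation counts for one set of P = 16 papers, sorted in decreasing order.
    citations = sorted([120, 85, 60, 44, 30, 22, 15, 9, 7, 5, 3, 2, 1, 1, 0, 0], reverse=True)
    P = len(citations)

    elite_size = round(math.sqrt(P))                  # sqrt(P) for small sets
    # elite_size = round(10 * math.log10(P) - 10)     # P(pi_v) variant for greater sets
    pi_index = sum(citations[:elite_size]) / 100      # one hundredth of the elite set's citations
    print(elite_size, pi_index)                       # 4 papers in the elite set, pi = 3.09
    ```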
  12. Ho, Y.-S.; Kahn, M.: ¬A bibliometric study of highly cited reviews in the Science Citation Index expanded(TM) (2014) 0.01
    0.0060652317 = product of:
      0.030326158 = sum of:
        0.023806747 = weight(_text_:web in 1203) [ClassicSimilarity], result of:
          0.023806747 = score(doc=1203,freq=4.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.25496176 = fieldWeight in 1203, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1203)
        0.00651941 = product of:
          0.019558229 = sum of:
            0.019558229 = weight(_text_:29 in 1203) [ClassicSimilarity], result of:
              0.019558229 = score(doc=1203,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.19432661 = fieldWeight in 1203, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1203)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    Some 1,857 highly cited reviews, namely those cited at least 1,000 times since publication to 2011, were identified using the data hosted on the Science Citation Index Expanded™ database (Thomson Reuters, New York, NY) between 1899 and 2011. The data are disaggregated by publication date, citation counts, journals, Web of Science® (Thomson Reuters) subject areas, citation life cycles, and publications by Nobel Prize winners. Six indicators, total publications, independent publications, collaborative publications, first-author publications, corresponding-author publications, and single-author publications, were applied to evaluate publication of institutions and countries. Among the highly cited reviews, 33% were single-author, 61% were single-institution, and 83% were single-country reviews. The United States ranked top for all 6 indicators. The G7 (United States, United Kingdom, Germany, Canada, France, Japan, and Italy) countries were the site of almost all the highly cited reviews. The top 12 most productive institutions were all located in the United States, with Harvard University (Cambridge, MA) the leader. The top 3 most productive journals were Chemical Reviews, Nature, and the Annual Review of Biochemistry. In addition, the impact of the reviews was analyzed by total citations from publication to 2011, citations in 2011, and citation in publication year.
    Date
    29. 1.2014 16:42:48
    Object
    Web of Science
  13. Wainer, J.; Przibisczki de Oliveira, H.; Anido, R.: Patterns of bibliographic references in the ACM published papers (2011) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 4240) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4240,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4240, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4240)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 4240) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4240,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4240, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4240)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This paper analyzes the bibliographic references made by all papers published by ACM in 2006. Both an automatic classification of all references and a human classification of a random sample of them showed that around 40% of the references are to conference proceedings papers, around 30% are to journal papers, and around 8% are to books. Among the other types of documents, standards and RFCs correspond to 3% of the references, technical and other reports correspond to 4%, and other Web references to 3%. Among the documents cited at least 10 times by the 2006 ACM papers, 41% are conference papers, 37% are books, and 16% are journal papers.
    Date
    23. 1.2011 17:10:29
  14. Fiala, D.: Bibliometric analysis of CiteSeer data for countries (2012) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 2742) [ClassicSimilarity], result of:
          0.020200694 = score(doc=2742,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 2742, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2742)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 2742) [ClassicSimilarity], result of:
              0.023469873 = score(doc=2742,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 2742, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2742)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This article describes the results of our analysis of the data from the CiteSeer digital library. First, we examined the data from the point of view of source top-level Internet domains from which the data were collected. Second, we measured country shares in publications indexed by CiteSeer and compared them to those based on mainstream bibliographic data from the Web of Science and Scopus. And third, we concentrated on analyzing publications and their citations aggregated by countries. This way, we generated rankings of the most influential countries in computer science using several non-recursive as well as recursive methods such as citation counts or PageRank. We conclude that even if East Asian countries are underrepresented in CiteSeer, its data may well be used along with other conventional bibliographic databases for comparing the computer science research productivity and performance of countries.
    Date
    29. 1.2016 18:36:47
  15. Ronda-Pupo, G.A.; Katz, J.S.: ¬The power-law relationship between citation-based performance and collaboration in articles in management journals : a scale-independent approach (2016) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 3127) [ClassicSimilarity], result of:
          0.020200694 = score(doc=3127,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 3127, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3127)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 3127) [ClassicSimilarity], result of:
              0.023469873 = score(doc=3127,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 3127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3127)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The objective of this article is to determine if academic collaboration is associated with the citation-based performance of articles that are published in management journals. We analyzed 127,812 articles published between 1988 and 2013 in 173 journals on the ISI Web of Science in the "management" category. Collaboration occurred in approximately 60% of all articles. A power-law relationship was found between citation-based performance and journal size and collaboration patterns. The number of citations expected by collaborative articles increases 2^1.89 or 3.7 times when the number of collaborative articles published in a journal doubles. The number of citations expected by noncollaborative articles only increases 2^1.35 or 2.55 times if a journal publishes double the number of noncollaborative articles. The Matthew effect is stronger for collaborative than for noncollaborative articles. Scale-independent indicators increase the confidence in the evaluation of the impact of the articles published in management journals.
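    A quick check of the doubling factors quoted above: under a power law C ∝ N^α, doubling N multiplies the expected number of citations by 2^α.
    ```python
    for alpha in (1.89, 1.35):
        print(alpha, round(2 ** alpha, 2))   # 1.89 -> 3.71, 1.35 -> 2.55
    ```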
    Date
    20. 9.2016 21:29:27
  16. Zhao, R.; Wei, M.; Quan, W.: Evolution of think tanks studies in view of a scientometrics perspective (2017) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 3843) [ClassicSimilarity], result of:
          0.020200694 = score(doc=3843,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 3843, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3843)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 3843) [ClassicSimilarity], result of:
              0.023469873 = score(doc=3843,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 3843, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3843)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The paper presents a scientometrics analysis of research work done on the emerging area of think tanks, which are regarded as a domain of information science. Research on think tanks started during the last century and in recent years has gained tremendous momentum. It is considered one of the most important emerging domains of research in information science. We have analyzed the research output data on think tanks during 2006-2016 indexed in the Web of Knowledge™ and Scopus®. Our study objectively explores the document co-citation clusters of 1,450 bibliographic records to identify the origin of think tanks and hot research specialties of the domain. CiteSpace was used to visualize the perspective of the think tanks domain. Pivotal articles, prominent authors, active disciplines and institutions have been identified by network analysis. This article describes the latest development of a generic approach to detect and visualize emerging trends and transient patterns in think tanks.
    Date
    29. 9.2017 18:46:06
  17. Leeuwen, T.N. van; Tatum, C.; Wouters, P.F.: Exploring possibilities to use bibliometric data to monitor gold open access publishing at the national level (2018) 0.01
    0.005604797 = product of:
      0.028023984 = sum of:
        0.020200694 = weight(_text_:web in 4458) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4458,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4458, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4458)
        0.007823291 = product of:
          0.023469873 = sum of:
            0.023469873 = weight(_text_:29 in 4458) [ClassicSimilarity], result of:
              0.023469873 = score(doc=4458,freq=2.0), product of:
                0.10064617 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23319192 = fieldWeight in 4458, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4458)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    This article describes the possibilities to analyze open access (OA) publishing in the Netherlands in an international comparative way. OA publishing is now actively stimulated by Dutch science policy, similar to the United Kingdom. We conducted a bibliometric baseline measurement to assess the current situation, to be able to measure developments over time. We collected data from various sources, and for three different smaller European countries (the Netherlands, Denmark, and Switzerland). Not all of the analyses for this baseline measurement are included here. The analysis presented in this article focuses on the various ways OA can be defined using the Web of Science, limiting the analysis mainly to Gold OA. From the data we collected we can conclude that the way OA is currently registered in various electronic bibliographic databases is quite unclear, and the various methods applied deliver results that are different, although the impact scores derived from the data point in the same direction.
    Date
    29. 9.2018 12:43:48
  18. Li, J.; Shi, D.: Sleeping beauties in genius work : when were they awakened? (2016) 0.01
    0.005590722 = product of:
      0.02795361 = sum of:
        0.020200694 = weight(_text_:web in 2647) [ClassicSimilarity], result of:
          0.020200694 = score(doc=2647,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 2647, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2647)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 2647) [ClassicSimilarity], result of:
              0.023258746 = score(doc=2647,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 2647, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2647)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    "Genius work," proposed by Avramescu, refers to scientific articles whose citations grow exponentially in an extended period, for example, over 50 years. Such articles were defined as "sleeping beauties" by van Raan, who quantitatively studied the phenomenon of delayed recognition. However, the criteria adopted by van Raan at times are not applicable and may confer recognition prematurely. To revise such deficiencies, this paper proposes two new criteria, which are applicable (but not limited) to exponential citation curves. We searched for genius work among articles of Nobel Prize laureates during the period of 1901-2012 on the Web of Science, finding 25 articles of genius work out of 21,438 papers including 10 (by van Raan's criteria) sleeping beauties and 15 nonsleeping-beauties. By our new criteria, two findings were obtained through empirical analysis: (a) the awakening periods for genius work depend on the increase rate b in the exponential function, and (b) lower b leads to a longer sleeping period.
    Date
    22. 1.2016 14:13:32
  19. Ridenour, L.: Boundary objects : measuring gaps and overlap between research areas (2016) 0.01
    0.005590722 = product of:
      0.02795361 = sum of:
        0.020200694 = weight(_text_:web in 2835) [ClassicSimilarity], result of:
          0.020200694 = score(doc=2835,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 2835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 2835) [ClassicSimilarity], result of:
              0.023258746 = score(doc=2835,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 2835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2835)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    The aim of this paper is to develop methodology to determine conceptual overlap between research areas. It investigates patterns of terminology usage in scientific abstracts as boundary objects between research specialties. Research specialties were determined by high-level classifications assigned by Thomson Reuters in their Essential Science Indicators file, which provided a strictly hierarchical classification of journals into 22 categories. Results from the query "network theory" were downloaded from the Web of Science. From this file, two top-level groups, economics and social sciences, were selected and topically analyzed to provide a baseline of similarity on which to run an informetric analysis. The Places & Spaces Map of Science (Klavans and Boyack 2007) was used to determine the proximity of disciplines to one another in order to select the two disciplines used in the analysis. The groups analyzed share common theories and goals; however, they used different language to describe their research. It was found that 61% of term words were shared between the two groups.
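    A minimal sketch of one plausible reading of the shared-term figure above, the fraction of the combined vocabulary used by both groups; the term sets are invented:
    ```python
    # Invented term sets extracted from the abstracts of two research groups.
    economics = {"network", "theory", "market", "equilibrium", "agent", "centrality"}
    social_sciences = {"network", "theory", "community", "tie", "agent", "centrality"}

    shared = economics & social_sciences
    combined = economics | social_sciences
    print(f"{len(shared) / len(combined):.0%} of terms are shared")
    ```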
  20. Leydesdorff, L.; Bornmann, L.; Wagner, C.S.: ¬The relative influences of government funding and international collaboration on citation impact (2019) 0.01
    0.005590722 = product of:
      0.02795361 = sum of:
        0.020200694 = weight(_text_:web in 4681) [ClassicSimilarity], result of:
          0.020200694 = score(doc=4681,freq=2.0), product of:
            0.0933738 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.028611459 = queryNorm
            0.21634221 = fieldWeight in 4681, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=4681)
        0.0077529154 = product of:
          0.023258746 = sum of:
            0.023258746 = weight(_text_:22 in 4681) [ClassicSimilarity], result of:
              0.023258746 = score(doc=4681,freq=2.0), product of:
                0.10019246 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.028611459 = queryNorm
                0.23214069 = fieldWeight in 4681, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4681)
          0.33333334 = coord(1/3)
      0.2 = coord(2/10)
    
    Abstract
    A recent publication in Nature reports that public R&D funding is only weakly correlated with the citation impact of a nation's articles as measured by the field-weighted citation index (FWCI; defined by Scopus). On the basis of the supplementary data, we up-scaled the design using Web of Science data for the decade 2003-2013 and OECD funding data for the corresponding decade assuming a 2-year delay (2001-2011). Using negative binomial regression analysis, we found very small coefficients, but the effects of international collaboration are positive and statistically significant, whereas the effects of government funding are negative, an order of magnitude smaller, and statistically nonsignificant (in two of three analyses). In other words, international collaboration improves the impact of research articles, whereas more government funding tends to have a small adverse effect when comparing OECD countries.
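    A minimal sketch of a negative binomial regression of citation counts on collaboration and funding indicators, fitted with statsmodels on synthetic data; the variable names, coefficients, and data are assumptions for illustration, not the study's dataset or results:
    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic predictors: share of internationally co-authored papers and public R&D funding.
    intl_collab = rng.uniform(0.1, 0.7, n)
    gov_funding = rng.uniform(0.3, 1.2, n)
    mu = np.exp(0.5 + 1.2 * intl_collab - 0.1 * gov_funding)
    citations = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

    X = sm.add_constant(np.column_stack([intl_collab, gov_funding]))
    model = sm.GLM(citations, X, family=sm.families.NegativeBinomial())
    print(model.fit().summary())
    ```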
    Date
    8. 1.2019 18:22:45

Types

  • a 191
  • el 3
  • m 3
  • s 1