Search (17 results, page 1 of 1)

  • × theme_ss:"Informetrie"
  • × type_ss:"el"
  1. Calculating the h-index : Web of Science, Scopus or Google Scholar? (2011) 0.10
    0.09906621 = product of:
      0.14859931 = sum of:
        0.055339385 = weight(_text_:science in 854) [ClassicSimilarity], result of:
          0.055339385 = score(doc=854,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.41158113 = fieldWeight in 854, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.078125 = fieldNorm(doc=854)
        0.09325992 = product of:
          0.18651985 = sum of:
            0.18651985 = weight(_text_:index in 854) [ClassicSimilarity], result of:
              0.18651985 = score(doc=854,freq=6.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.836226 = fieldWeight in 854, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.078125 = fieldNorm(doc=854)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    A comparison of how the h-index is calculated in the three tools, using Stephen Hawking as an example (WoS: 59, Scopus: 19, Google Scholar: 76).
    Object
    h-index
    Web of Science
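The scoring breakdown shown for each hit follows Lucene's ClassicSimilarity (tf-idf). As a sketch, the "science in 854" branch of hit 1 can be reproduced in a few lines of Python by plugging in the values from the trace above; the formula is standard Lucene, only the helper name is ours:

```python
import math

def classic_term_score(freq, idf, query_norm, field_norm):
    """Reconstruct one weight(_text_:term) line of a Lucene
    ClassicSimilarity explain trace."""
    tf = math.sqrt(freq)                  # 2.0 = tf(freq=4.0)
    query_weight = idf * query_norm       # 0.13445559 = queryWeight
    field_weight = tf * idf * field_norm  # 0.41158113 = fieldWeight
    return query_weight * field_weight    # 0.055339385 = score

# The idf value itself comes from 1 + ln(maxDocs / (docFreq + 1)):
idf = 1.0 + math.log(44218 / (8627 + 1))  # ≈ 2.6341193

# Values from the "science in 854" branch of hit 1:
score = classic_term_score(freq=4.0,
                           idf=2.6341193,
                           query_norm=0.05104385,
                           field_norm=0.078125)
print(round(score, 9))
```

The outer `coord(2/3)` and `coord(1/2)` factors seen in the traces then scale the summed term scores by the fraction of query clauses that matched.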
  2. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.07
    0.06851512 = product of:
      0.10277268 = sum of:
        0.027391598 = weight(_text_:science in 855) [ClassicSimilarity], result of:
          0.027391598 = score(doc=855,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20372227 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
        0.075381085 = product of:
          0.15076217 = sum of:
            0.15076217 = weight(_text_:index in 855) [ClassicSimilarity], result of:
              0.15076217 = score(doc=855,freq=8.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.67591333 = fieldWeight in 855, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=855)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric - Hirsch's h-index - and data source - Google Scholar - to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
    Object
    h-index
    Web of Science
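Several of the hits in this list revolve around the h-index itself, so a compact definition may help: an author has index h if h of their papers have at least h citations each. A minimal sketch, with an invented citation list for illustration:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank   # still h papers with >= h citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # → 3 (three papers with >= 3 citations)
```

Because the sources index different document sets, the same author gets different citation lists, and hence different h-values, in WoS, Scopus and Google Scholar, which is exactly the discrepancy hit 1 documents.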
  3. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.05
    0.052249998 = product of:
      0.078375 = sum of:
        0.03623568 = weight(_text_:science in 3460) [ClassicSimilarity], result of:
          0.03623568 = score(doc=3460,freq=14.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.26949924 = fieldWeight in 3460, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.042139314 = product of:
          0.08427863 = sum of:
            0.08427863 = weight(_text_:index in 3460) [ClassicSimilarity], result of:
              0.08427863 = score(doc=3460,freq=10.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.37784708 = fieldWeight in 3460, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3460)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf. also Open Password, no. 167, 1 March 2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters and two grandchildren. Garfield earned his first degree, a bachelor's in chemistry, at Columbia University in New York City in 1949. In 1954 he added a degree in library science, and in 1961 he took his doctorate in structural linguistics. By his own account he was neither particularly good nor particularly happy as a chemistry student. His "awakening" came at a meeting of the American Chemical Society, where he discovered that searching the literature might be a way to earn a living: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield initially worked as a consultant to pharmaceutical companies, specializing in scientific information by working through the contents of the relevant journals. In 1955 he put forward, in Science, his groundbreaking idea of systematically recording citations of scientific publications and making the connections between citations visible. In 1960 Garfield founded the Institute for Scientific Information (ISI), whose CEO he remained until 1992. In 1964 he launched the Science Citation Index. Further products followed, including the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978) and the Journal Citation Reports. These indexes were brought together in the Web of Science and made electronically accessible as a database, enabling researchers to find the literature relevant to them "at their fingertips" and to orient themselves within it.
    Beyond that, rankings based on Garfield's metrics made it possible to measure the relative scientific importance of scholarly contributions, authors, institutions, regions and countries.
    In connection with his metrics, Garfield spoke out against "bibliographic negligence" and "citation amnesia". In 2002 he wrote: "There will never be a perfect solution to the problem of acknowledging intellectual debts. But a beginning can be made if journal editors will demand a signed pledge from authors that they have searched Medline, Science Citation Index, or other appropriate print and electronic databases." But he also warned against improper use of his metrics and against exaggerated expectations of them in connection with career decisions about scientists and survival decisions for scientific institutions. In 1992 the Thomson Corporation acquired ISI for 210 million dollars. Its present-day successor, Clarivate Analytics, employs more than 4,000 people in over a hundred countries. Garfield also founded a newspaper for scientists, especially life scientists, The Scientist, which still exists and can be obtained as a free push service. In his contributions to science policy he criticized, for example, President Reagan's science advisers in 1986 as "advocates of the administration's science policies, rather than as objective conduits for communication between the president and the science community." His article urging continued funding of UNESCO research programs was titled "Let's Stand Up for Global Science". That remains a fitting title in the Trump era, as the US government dismisses as meaningless the concept of truth on which science rests, and focuses on nationalism and isolation instead of international communication, cooperation and the joint pursuit of shared interests."
  4. Krattenthaler, C.: Was der h-Index wirklich aussagt (2021) 0.04
    0.037988503 = product of:
      0.113965504 = sum of:
        0.113965504 = product of:
          0.22793101 = sum of:
            0.22793101 = weight(_text_:index in 407) [ClassicSimilarity], result of:
              0.22793101 = score(doc=407,freq=14.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                1.021885 = fieldWeight in 407, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0625 = fieldNorm(doc=407)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This note shows that the so-called h-index (Hirsch's bibliometric index) conveys essentially the same information as the total number of citations of an author's publications, and is thus a useless bibliometric index. The argument rests on a fascinating theorem of probability theory, which is also explained here.
    Content
    See DOI: 10.1515/dmvm-2021-0050. Also reprinted under the title 'Der h-Index - "ein nutzloser bibliometrischer Index"' in Open Password no. 1007 of 6 December 2021 at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM3NCwiZDI3MzMzOTEwMzUzIiwwLDAsMzQ4LDFd.
    Object
    h-index
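Krattenthaler's point, that the h-index carries essentially the same information as the total citation count, echoes an empirical rule already noted in Hirsch's original 2005 paper: total citations grow roughly quadratically in h, with a proportionality constant a typically between 3 and 5, so that

```latex
N_{c,\mathrm{tot}} \approx a\,h^{2}, \qquad 3 \lesssim a \lesssim 5
\quad\Longrightarrow\quad
h \approx \sqrt{N_{c,\mathrm{tot}}/a}
```

i.e. knowing the total citation count already pins down h up to a modest factor.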
  5. Metrics in research : for better or worse? (2016) 0.04
    0.035062876 = product of:
      0.05259431 = sum of:
        0.022135753 = weight(_text_:science in 3312) [ClassicSimilarity], result of:
          0.022135753 = score(doc=3312,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.16463245 = fieldWeight in 3312, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=3312)
        0.03045856 = product of:
          0.06091712 = sum of:
            0.06091712 = weight(_text_:index in 3312) [ClassicSimilarity], result of:
              0.06091712 = score(doc=3312,freq=4.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.27311024 = fieldWeight in 3312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3312)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    If you are an academic researcher but have not yet earned your Nobel prize or your retirement, it is unlikely that you have never heard of research metrics. These metrics aim at quantifying various aspects of the research process, at the level of individual researchers (e.g. h-index, altmetrics), scientific journals (e.g. impact factors) or entire universities/countries (e.g. rankings). Although such "measurements" have existed in a simple form for a long time, their widespread calculation was enabled by the advent of the digital era (large amounts of data available worldwide in a computer-compatible format). And in this new era, what becomes technically possible will be done, and what is done and appears to simplify our lives will be used. As a result, a rapidly growing number of statistics-based numerical indices are nowadays fed into decision-making processes. This is true in nearly all aspects of society (politics, economy, education and private life), and in particular in research, where metrics play an increasingly important role in determining positions, funding, awards, research programs, career choices, reputations, etc.
    Content
    Inhalt: Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger Oliver Renn - A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl - Bibliometry: The wizard of O's / Philippe Hünenberger - The grip of bibliometrics - A student perspective / Matthias Tinzl - Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark - Beyond metrics: Managing the performance of your work / Charlie Rapple - Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball - More knowledge, less numbers / Carl Philipp Rosenau - Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz - Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge - Metrics in research: More (valuable) questions than answers / Urs Hugentobler - Publication of research results: Use and abuse / Wilfred F. van Gunsteren - Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille - Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi - Rashomon or metrics in a publisher's world / Gabriella Karger - The impact factor and I: A love-hate relationship / Jean-Christophe Leroux - Personal experiences bringing altmetrics to the academic market / Ben McLeish - Fatally attracted by numbers? / Oliver Renn - On computable numbers / Gerd Folkers, Laura Folkers - ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran.
  6. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.03
    0.032106146 = product of:
      0.09631843 = sum of:
        0.09631843 = product of:
          0.19263686 = sum of:
            0.19263686 = weight(_text_:index in 1563) [ClassicSimilarity], result of:
              0.19263686 = score(doc=1563,freq=10.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.86365044 = fieldWeight in 1563, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1563)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index in dependence on the length of the citation time window.
    Object
    h-index
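Schreiber's timed variant simply discards citations that fall outside a trailing window before computing h. A sketch under the assumption that each paper's citations are stored as a list of citation years; the function name and data are hypothetical:

```python
def timed_h_index(citation_years_per_paper, eval_year, window):
    """h-index computed only from citations received in
    the window (eval_year - window, eval_year]."""
    counts = []
    for years in citation_years_per_paper:
        counts.append(sum(1 for y in years
                          if eval_year - window < y <= eval_year))
    counts.sort(reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    return h

# Three papers; the first two were cited only long ago:
papers = [[1990] * 5, [1991] * 4, [2013, 2014]]
print(timed_h_index(papers, eval_year=2014, window=5))   # → 1
print(timed_h_index(papers, eval_year=2014, window=30))  # → 2
```

Shrinking the window removes the inertia the abstract describes: old citations can no longer prop up the index.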
  7. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.03
    0.029363623 = product of:
      0.044045433 = sum of:
        0.011739256 = weight(_text_:science in 2417) [ClassicSimilarity], result of:
          0.011739256 = score(doc=2417,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.08730954 = fieldWeight in 2417, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0234375 = fieldNorm(doc=2417)
        0.03230618 = product of:
          0.06461236 = sum of:
            0.06461236 = weight(_text_:index in 2417) [ClassicSimilarity], result of:
              0.06461236 = score(doc=2417,freq=8.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.28967714 = fieldWeight in 2417, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=2417)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
    The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that were done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research; citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
    Object
    h-index
  8. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.016300548 = product of:
      0.048901644 = sum of:
        0.048901644 = product of:
          0.09780329 = sum of:
            0.09780329 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.09780329 = score(doc=3925,freq=4.0), product of:
                0.17874686 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05104385 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 7.2006 15:22:28
  9. Chawla, D.S.: Hundreds of 'predatory' journals indexed on leading scholarly database (2021) 0.01
    0.013043619 = product of:
      0.039130855 = sum of:
        0.039130855 = weight(_text_:science in 148) [ClassicSimilarity], result of:
          0.039130855 = score(doc=148,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.2910318 = fieldWeight in 148, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.078125 = fieldNorm(doc=148)
      0.33333334 = coord(1/3)
    
    Abstract
    Scopus has stopped adding content from most of the flagged titles, but the analysis highlights how poor-quality science is infiltrating the literature.
  10. Schmitz, J.; Arning, U.; Peters, I.: handbuch.io : Handbuch CoScience / Messung von wissenschaftlichem Impact (2015) 0.01
    0.0125635145 = product of:
      0.037690543 = sum of:
        0.037690543 = product of:
          0.075381085 = sum of:
            0.075381085 = weight(_text_:index in 2189) [ClassicSimilarity], result of:
              0.075381085 = score(doc=2189,freq=2.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.33795667 = fieldWeight in 2189, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2189)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Object
    h-Index
  11. Abdelkareem, M.A.A.: In terms of publication index, what indicator is the best for researchers indexing, Google Scholar, Scopus, Clarivate or others? (2018) 0.01
    0.0125635145 = product of:
      0.037690543 = sum of:
        0.037690543 = product of:
          0.075381085 = sum of:
            0.075381085 = weight(_text_:index in 4548) [ClassicSimilarity], result of:
              0.075381085 = score(doc=4548,freq=2.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.33795667 = fieldWeight in 4548, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4548)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
  12. Positionspapier der DMV zur Verwendung bibliometrischer Daten (2020) 0.01
    0.0125635145 = product of:
      0.037690543 = sum of:
        0.037690543 = product of:
          0.075381085 = sum of:
            0.075381085 = weight(_text_:index in 5738) [ClassicSimilarity], result of:
              0.075381085 = score(doc=5738,freq=2.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.33795667 = fieldWeight in 5738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5738)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Object
    h-index
  13. Bagrow, J.P.; Rozenfeld, H.D.; Bollt, E.M.; Ben-Avraham, D.: How famous is a scientist? : famous to those who know us (2004) 0.01
    0.009130533 = product of:
      0.027391598 = sum of:
        0.027391598 = weight(_text_:science in 2497) [ClassicSimilarity], result of:
          0.027391598 = score(doc=2497,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20372227 = fieldWeight in 2497, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2497)
      0.33333334 = coord(1/3)
    
    Abstract
    Following a recent idea to measure fame by the number of Google hits found in a search on the WWW, we study the relation between fame (Google hits) and merit (number of papers posted on an electronic archive) for a random group of scientists in condensed matter and statistical physics. Our findings show that fame and merit in science are linearly related, and that the probability distribution for a certain level of fame falls off exponentially. This is in sharp contrast with the original findings about WW II ace pilots, for which fame is exponentially related to merit (number of downed planes), and the probability of fame decays in power-law fashion. Other groups in our study show similar patterns of fame as for ace pilots.
  14. Williams, B.: Dimensions & VOSViewer bibliometrics in the reference interview (2020) 0.01
    0.009130533 = product of:
      0.027391598 = sum of:
        0.027391598 = weight(_text_:science in 5719) [ClassicSimilarity], result of:
          0.027391598 = score(doc=5719,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20372227 = fieldWeight in 5719, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5719)
      0.33333334 = coord(1/3)
    
    Abstract
    The VOSviewer software provides easy access to bibliometric mapping using data from Dimensions, Scopus and Web of Science. The properly formatted and structured citation data, and the ease with which they can be exported, open up new avenues for use during citation searches and reference interviews. This paper details specific techniques for using advanced searches in Dimensions, exporting the citation data, and drawing insights from the maps produced in VOSviewer. These search techniques and data-export practices are fast and accurate enough to build into reference interviews for graduate students, faculty, and post-PhD researchers. The search results derived from them are accurate and allow a more comprehensive view of the citation networks embedded in ordinary complex Boolean searches.
  15. Czaran, E.; Wolski, M.; Richardson, J.: Improving research impact through the use of media (2017) 0.01
    0.007826171 = product of:
      0.023478512 = sum of:
        0.023478512 = weight(_text_:science in 5057) [ClassicSimilarity], result of:
          0.023478512 = score(doc=5057,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.17461908 = fieldWeight in 5057, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=5057)
      0.33333334 = coord(1/3)
    
    Source
    Open information science. 1(2017) no.1, S.41-55
  16. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.01
    0.0071791513 = product of:
      0.021537453 = sum of:
        0.021537453 = product of:
          0.043074906 = sum of:
            0.043074906 = weight(_text_:index in 3081) [ClassicSimilarity], result of:
              0.043074906 = score(doc=3081,freq=2.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.1931181 = fieldWeight in 3081, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3081)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This work analyzes the dynamic development and use of thesaurus terms, and focuses in addition on the factors that influence the number of index terms per document or journal. MeSH and the corresponding database MEDLINE served as the objects of study. The main findings are: 1. The MeSH thesaurus has grown logarithmically through three distinct phases. Such a thesaurus should follow the equation T = 3,076.6 ln(d) - 22,695 + 0.0039 d (T = terms, ln = natural logarithm, d = documents). To construct such a thesaurus one therefore needs roughly 1,600 documents covering the various topics of the thesaurus's domain. The dynamic development of a thesaurus such as MeSH requires introducing one new term for every 256 newly indexed documents. 2. The distribution of thesaurus terms yields three categories: heavily used, normally used and rarely used headings. The last group is in a test phase, while in the first two categories newly added descriptors drive thesaurus growth. 3. There is a logarithmic relationship between the number of index terms per article and its page count, for articles of one to twenty-one pages. 4. Journal articles that appear in MEDLINE with an abstract receive almost two additional descriptors. 5. The findability of non-English-language documents in MEDLINE is lower than that of English-language documents. 6. Articles from journals with an impact factor of zero to fifteen do not receive more index terms than those from the other journals covered by MEDLINE. 7. In an indexing system, different journals carry more or less weight in their findability; the distribution of index terms per page shows that MEDLINE contains three categories of publications.
  17. Gutierres Castanha, R.C.; Hilário, C.M.; Araújo, P.C. de; Cabrini Grácio, M.C.: Citation analysis of North American Symposium on Knowledge Organization (NASKO) Proceedings (2007-2015) (2017) 0.01
    0.0065218094 = product of:
      0.019565428 = sum of:
        0.019565428 = weight(_text_:science in 3863) [ClassicSimilarity], result of:
          0.019565428 = score(doc=3863,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.1455159 = fieldWeight in 3863, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3863)
      0.33333334 = coord(1/3)
    
    Abstract
    Knowledge Organization (KO) theoretical foundations are still being developed in a continuous process of epistemological, theoretical and methodological consolidation. The remarkable growth of scientific records has stimulated the analysis of this production, and the creation of instruments to evaluate the behavior of science has become indispensable. We propose a domain analysis of KO in North America through the citation analysis of North American Symposium on Knowledge Organization (NASKO) proceedings (2007-2015). We present citation, co-citation and bibliographic coupling analyses to visualize and recognize the researchers who influence scholarly communication in this domain. The most prolific authors across NASKO conferences are Smiraglia, Tennis, Green, Dousa, Grant Campbell, Pimentel, Beak, La Barre, Kipp and Fox. Regarding their theoretical references, Hjørland, Olson, Smiraglia, and Ranganathan are the authors who most inspired the studies presented at the event. The co-citation network shows the highest frequency between Olson and Mai, followed by Hjørland and Mai and by Beghtol and Mai, consolidating Mai and Hjørland as the central authors of the theoretical references in NASKO. The strongest theoretical proximity in the author bibliographic coupling network occurs between Fox and Tennis, Dousa and Tennis, Tennis and Smiraglia, Dousa and Beak, and Pimentel and Tennis, highlighting Tennis as the central author who interconnects the others with respect to KO theoretical references in NASKO. The North American chapter has demonstrated strong scientific production as well as a high level of concern with theoretical and epistemological questions, gathering researchers from different countries, universities and knowledge areas.