Search (8 results, page 1 of 1)

  • theme_ss:"Informetrie"
  • type_ss:"el"
  1. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.03
    
    Date
22.07.2006 15:22:28
    Source
    Information Research. 6(2001), no.2
  2. Adler, R.; Ewing, J.; Taylor, P.: Citation statistics : A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS) (2008) 0.03
    
    Abstract
    Using citation data to assess research ultimately means using citation-based statistics to rank things: journals, papers, people, programs, and disciplines. The statistical tools used to rank these things are often misunderstood and misused. - For journals, the impact factor is most often used for ranking. This is a simple average derived from the distribution of citations for a collection of articles in the journal. The average captures only a small amount of information about that distribution, and it is a rather crude statistic. In addition, there are many confounding factors when judging journals by citations, and any comparison of journals requires caution when using impact factors. Using the impact factor alone to judge a journal is like using weight alone to judge a person's health. - For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs. - For individual scientists, complete citation records can be difficult to compare. As a consequence, there have been attempts to find simple statistics that capture the full complexity of a scientist's citation record with a single number. The most notable of these is the h-index, which seems to be gaining in popularity. But even a casual inspection of the h-index and its variants shows that these are naive attempts to understand complicated citation records. While they capture a small amount of information about the distribution of a scientist's citations, they lose crucial information that is essential for the assessment of research.
    The validity of statistics such as the impact factor and h-index is neither well understood nor well studied. The connection of these statistics with research quality is sometimes established on the basis of "experience." The justification for relying on them is that they are "readily available." The few studies of these statistics that have been done focused narrowly on showing a correlation with some other measure of quality rather than on determining how one can best derive useful information from citation data. We do not dismiss citation statistics as a tool for assessing the quality of research: citation data and statistics can provide some valuable information. We recognize that assessment must be practical, and for this reason easily derived citation statistics almost surely will be part of the process. But citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused. Research is too important to measure its value with only a single coarse tool. We hope those involved in assessment will read both the commentary and the details of this report in order to understand not only the limitations of citation statistics but also how better to use them. If we set high standards for the conduct of science, surely we should set equally high standards for assessing its quality.
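    The report's two central points, that a journal-level average hides the shape of the citation distribution and that the h-index compresses a whole record into one coarse number, can be made concrete with a short sketch. This is a hypothetical Python illustration with invented citation counts, not data from the report:

    ```python
    # Invented citation counts for two hypothetical journals with 6 articles each.
    journal_a = [0, 0, 0, 1, 2, 97]       # one highly cited paper dominates
    journal_b = [15, 16, 17, 17, 17, 18]  # evenly cited papers

    def impact_factor_style_mean(citations):
        """A simple average over a collection of articles,
        roughly how the impact factor is computed."""
        return sum(citations) / len(citations)

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations."""
        cs = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

    # Both journals have the same mean, yet very different distributions,
    # which is exactly the information the average throws away.
    print(impact_factor_style_mean(journal_a))  # 16.666...
    print(impact_factor_style_mean(journal_b))  # 16.666...
    print(h_index(journal_a), h_index(journal_b))  # 2 6
    ```

    The same `h_index` helper also shows the compression the report criticizes: the six-number profile of each journal (or scientist) collapses to a single integer.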
  3. Gutierres Castanha, R.C.; Hilário, C.M.; Araújo, P.C. de; Cabrini Grácio, M.C.: Citation analysis of North American Symposium on Knowledge Organization (NASKO) Proceedings (2007-2015) (2017) 0.01
    
    Abstract
    Knowledge Organization (KO) theoretical foundations are still being developed in a continuous process of epistemological, theoretical and methodological consolidation. The remarkable growth of scientific records has stimulated the analysis of this production, and the creation of instruments to evaluate the behavior of science has become indispensable. We propose a domain analysis of KO in North America through the citation analysis of the North American Symposium on Knowledge Organization (NASKO) proceedings (2007-2015). We present citation, co-citation and bibliographic coupling analyses to visualize and recognize the researchers who influence scholarly communication in this domain. The most prolific authors across the NASKO conferences are Smiraglia, Tennis, Green, Dousa, Grant Campbell, Pimentel, Beak, La Barre, Kipp and Fox. Regarding their theoretical references, Hjørland, Olson, Smiraglia, and Ranganathan are the authors who most inspired the event's studies. The co-citation network shows the highest frequency between Olson and Mai, followed by Hjørland and Mai and by Beghtol and Mai, consolidating Mai and Hjørland as the central authors of the theoretical references in NASKO. The strongest theoretical proximity in the author bibliographic coupling network occurs between Fox and Tennis, Dousa and Tennis, Tennis and Smiraglia, Dousa and Beak, and Pimentel and Tennis, highlighting Tennis as the central author who interconnects the others with respect to KO theoretical references in NASKO. The North American chapter has demonstrated strong scientific production as well as a high level of concern with theoretical and epistemological questions, gathering researchers from different countries, universities and knowledge areas.
    Content
    Paper presented at: NASKO 2017: Visualizing Knowledge Organization: Bringing Focus to Abstract Realities. The sixth North American Symposium on Knowledge Organization (NASKO 2017), June 15-16, 2017, in Champaign, IL, USA.
  4. Momeni, F.; Mayr, P.: Analyzing the research output presented at European Networked Knowledge Organization Systems workshops (2000-2015) (2016) 0.01
    
    Abstract
    In this paper we analyze a major part of the research output of the Networked Knowledge Organization Systems (NKOS) community in the period 2000 to 2015 from a network analytical perspective. We focus on the paper output presented at the European NKOS workshops in the last 15 years. Our open dataset, the "NKOS bibliography", includes 14 workshop agendas (ECDL 2000-2010, TPDL 2011-2015) and 4 special issues on NKOS (2001, 2004, 2006 and 2015) which cover 171 papers with 218 distinct authors in total. A focus of the analysis is the visualization of co-authorship networks in this interdisciplinary field. We used standard network analytic measures like degree and betweenness centrality to describe the co-authorship distribution in our NKOS dataset. We can see in our dataset that 15% (with degree=0) of authors had no co-authorship with others and 53% of them had a maximum of 3 cooperations with other authors. 32% had at least 4 co-authors for all of their papers. The NKOS co-author network in the "NKOS bibliography" is a typical co-authorship network with one relatively large component, many smaller components and many isolated co-authorships or triples.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
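    Degree statistics of the kind the NKOS paper reports (e.g. the share of authors with degree 0) can be computed from any paper-author list without a network library. A toy sketch with invented author names, not the NKOS data:

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Toy dataset: each paper is the set of its authors (invented names).
    papers = [
        {"A", "B"}, {"A", "C"}, {"B", "C"}, {"D"}, {"E", "F"},
    ]

    # Build the co-authorship graph; an author's degree is the number
    # of distinct co-authors across all of their papers.
    coauthors = defaultdict(set)
    for authors in papers:
        for u, v in combinations(sorted(authors), 2):
            coauthors[u].add(v)
            coauthors[v].add(u)
        for a in authors:
            coauthors.setdefault(a, set())  # solo authors get degree 0

    degrees = {a: len(cs) for a, cs in coauthors.items()}
    isolated_share = sum(1 for d in degrees.values() if d == 0) / len(degrees)
    print(degrees)         # A, B, C each have 2 co-authors; D has 0
    print(isolated_share)  # fraction of authors with degree 0, here 1/6
    ```

    Betweenness centrality needs shortest-path computations and is easier with a graph library, but the degree distribution above already separates isolated authors from the connected components the abstract describes.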
  5. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.00
    
    Content
    The text is a revised version of: Herb, U. (2018): Zwangsehen und Bastarde : Wohin steuert Big Data die Wissenschaft? In: Information - Wissenschaft & Praxis, 69(2-3), pp. 81-88. DOI:10.1515/iwp-2018-0021.
  6. Krattenthaler, C.: Was der h-Index wirklich aussagt (2021) 0.00
    
    Abstract
    This note shows that the so-called h-index (Hirsch's bibliometric index) essentially conveys the same information as the total number of citations of an author's publications, and is therefore a useless bibliometric index. This rests on a fascinating theorem of probability theory, which is also explained here.
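    Krattenthaler's claim, that the h-index largely tracks the total citation count, can be illustrated empirically. A hedged sketch with synthetic citation profiles; the exponential citation model and all parameters are my assumptions, not the paper's:

    ```python
    import random

    def h_index(citations):
        """Largest h such that at least h papers have >= h citations."""
        cs = sorted(citations, reverse=True)
        return sum(1 for i, c in enumerate(cs, start=1) if c >= i)

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient, stdlib only."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    random.seed(0)
    # Synthetic authors: exponentially distributed citation counts per paper,
    # with output sizes varying between 20 and 200 papers.
    profiles = [[int(random.expovariate(0.1)) for _ in range(random.randint(20, 200))]
                for _ in range(200)]

    totals = [sum(p) for p in profiles]
    hs = [h_index(p) for p in profiles]

    # Under this model the h-index behaves roughly like the square root of the
    # total citation count, so the two rankings nearly coincide.
    r = pearson([t ** 0.5 for t in totals], hs)
    print(round(r, 3))
    ```

    A high correlation under such a model is the empirical face of the theorem the note discusses: once the total is known, the h-index adds little.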
  7. Czaran, E.; Wolski, M.; Richardson, J.: Improving research impact through the use of media (2017) 0.00
    
    Source
    Open information science. 1(2017) no.1, S.41-55
  8. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.00
    
    Content
    See also Open Password, no. 167, 1 March 2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters and two great-grandchildren. Garfield earned his bachelor's degree in chemistry at Columbia University in New York City in 1949, added a degree in library science in 1954, and went on to receive a doctorate in structural linguistics in 1961. By his own account he was neither particularly good nor particularly happy as a chemistry student. His "awakening" came at a meeting of the American Chemical Society, when he discovered that searching for literature might be a way to earn a living: "So I went to the Chairman of the meeting and said: "How do you get a job in this racket?"" From 1955 Garfield initially worked as a consultant for pharmaceutical companies, where he specialized in scientific information services by working through the contents of relevant journals. In 1955 he proposed his groundbreaking idea in "Science": to systematically record the citations of scientific publications and to make the connections between citations visible. In 1960 Garfield founded the Institute for Scientific Information, whose CEO he remained until 1992. In 1964 he launched the Science Citation Index. Further instruments such as the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978) and the Journal Citation Reports followed. These indexes were brought together in the "Web of Science" and made electronically accessible as a database, enabling researchers to find the literature relevant to them "at their fingertips" and to orient themselves within it.
 Beyond that, the rankings based on Garfield's measures made it possible to gauge the relative scientific importance of scholarly contributions, authors, research institutions, regions and countries."