Search (148 results, page 1 of 8)

  • Filter: theme_ss:"Informetrie"
  1. Herb, U.; Beucke, D.: Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.31
    0.3052337 = product of:
      0.7122119 = sum of:
        0.23740397 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23740397 = score(doc=2188,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.23740397 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23740397 = score(doc=2188,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
        0.23740397 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.23740397 = score(doc=2188,freq=2.0), product of:
            0.31681007 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.037368443 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.42857143 = coord(3/7)
    
    Content
    See: https://www.leibniz-science20.de/forschung/projekte/altmetrics-in-verschiedenen-wissenschaftsdisziplinen/.
  2. Marx, W.: Wie mißt man Forschungsqualität? : der Science Citation Index - ein Maßstab für die Bewertung (1996) 0.04
    0.042024657 = product of:
      0.14708629 = sum of:
        0.10837604 = weight(_text_:interpretation in 5036) [ClassicSimilarity], result of:
          0.10837604 = score(doc=5036,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 5036, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=5036)
        0.03871025 = product of:
          0.0774205 = sum of:
            0.0774205 = weight(_text_:anwendung in 5036) [ClassicSimilarity], result of:
              0.0774205 = score(doc=5036,freq=2.0), product of:
                0.1809185 = queryWeight, product of:
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.037368443 = queryNorm
                0.42793027 = fieldWeight in 5036, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.8414783 = idf(docFreq=948, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5036)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Abstract
    An overburdened reviewer system, ever-scarcer research funding, and the strong fascination of rankings are increasingly driving the use of bibliometric methods for measuring research quality. Most assessments are based on the Science Citation Index, which can now also be used for extensive analyses in its online-database version. Extensions to the retrieval language at the host STN International enable statistical analyses that were previously reserved for the SCI's producer and a few specialists. Above all, meaningful application requires choosing suitable selection criteria and interpreting the results carefully, within the limits of these methods.
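    The score breakdowns attached to each hit are Lucene explain() output for ClassicSimilarity, i.e. plain TF-IDF: tf = sqrt(termFreq), idf = 1 + ln(maxDocs/(docFreq+1)), fieldWeight = tf * idf * fieldNorm, and each matching term contributes queryWeight * fieldWeight, scaled by coord(matched/total query terms). (The token "2f" counted three times in hit 1 is residue of the %2F escapes in that hit's originally percent-encoded URL, decoded above.) A minimal sketch reproducing hit 2's score from the constants in its breakdown; the function names are ours, not Lucene's:

      import math

      def classic_idf(doc_freq, max_docs):
          # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
          return 1.0 + math.log(max_docs / (doc_freq + 1))

      def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          tf = math.sqrt(freq)                     # tf(freq) = sqrt(termFreq)
          idf = classic_idf(doc_freq, max_docs)
          # queryWeight = idf * queryNorm; fieldWeight = tf * idf * fieldNorm
          return (idf * query_norm) * (tf * idf * field_norm)

      QUERY_NORM, MAX_DOCS = 0.037368443, 44218

      # Hit 2 (doc 5036): "interpretation" (df=390), plus "anwendung" (df=948)
      # inside a coord(1/2)-scaled subquery; the outer coord is 2/7.
      interpretation = term_score(2.0, 390, MAX_DOCS, QUERY_NORM, 0.0625)   # 0.10837604
      anwendung = 0.5 * term_score(2.0, 948, MAX_DOCS, QUERY_NORM, 0.0625)  # 0.03871025
      print((interpretation + anwendung) * 2 / 7)                           # 0.042024657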
  3. Frandsen, T.F.; Nicolaisen, J.: Citation behavior : a large-scale test of the persuasion by name-dropping hypothesis (2017) 0.02
    0.020112088 = product of:
      0.1407846 = sum of:
        0.1407846 = weight(_text_:interpretation in 3601) [ClassicSimilarity], result of:
          0.1407846 = score(doc=3601,freq=6.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.65770864 = fieldWeight in 3601, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3601)
      0.14285715 = coord(1/7)
    
    Abstract
    Citation frequencies are commonly interpreted as measures of quality or impact. Yet the true nature of citations and their proper interpretation have been the center of a long, but still unresolved, discussion in bibliometrics. A comparison of 67,578 pairs of studies on the same healthcare topic, with the same publication age (1-15 years), reveals that when one of the studies is selected for citation, it has on average received about three times as many citations as the other study. However, the average citation gap between selected and deselected studies narrows slightly over time, which fits poorly with the name-dropping interpretation and better with the quality-and-impact interpretation. The results demonstrate that authors in the field of healthcare tend to cite highly cited documents when they have a choice. This is more likely caused by differences related to quality than by differences related to the status of the publications cited.
  4. Burrell, Q.L.: Some comments on "A proposal for a dynamic h-Type Index" by Rousseau and Ye (2009) 0.02
    0.0154822925 = product of:
      0.10837604 = sum of:
        0.10837604 = weight(_text_:interpretation in 2722) [ClassicSimilarity], result of:
          0.10837604 = score(doc=2722,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.5063043 = fieldWeight in 2722, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0625 = fieldNorm(doc=2722)
      0.14285715 = coord(1/7)
    
    Abstract
    Caution is urged over the adoption of dynamic h-type indexes as advocated by Rousseau and Ye (2008). It is shown that the dynamics are critically dependent upon model assumptions and that practical interpretation might therefore be problematic. However, interesting questions regarding the interrelations between various h-type indexes are raised.
  5. Zuccala, A.: Author cocitation analysis is to intellectual structure as Web colink analysis is to ... ? (2006) 0.01
    0.013684542 = product of:
      0.09579179 = sum of:
        0.09579179 = weight(_text_:interpretation in 6008) [ClassicSimilarity], result of:
          0.09579179 = score(doc=6008,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.44751403 = fieldWeight in 6008, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6008)
      0.14285715 = coord(1/7)
    
    Abstract
    Author Cocitation Analysis (ACA) and Web Colink Analysis (WCA) are examined as sister techniques in the related fields of bibliometrics and webometrics. Comparisons are made between the two techniques based on their data retrieval, mapping, and interpretation procedures, using mathematics as the subject in focus. An ACA is carried out and interpreted for a group of participants (authors) involved in an Isaac Newton Institute (2000) workshop, Singularity Theory and Its Applications to Wave Propagation Theory and Dynamical Systems, and compared/contrasted with a WCA for a list of international mathematics research institute home pages on the Web. Although the practice of ACA may be used to inform a WCA, the two techniques do not share many elements in common. The most important departure between ACA and WCA occurs at the interpretive stage: ACA maps become meaningful in light of citation theory, whereas WCA maps require interpretation based on hyperlink theory. Much of the research concerning link theory and motivations for linking is still new; further colink-based studies, mainly map-based ones, are therefore needed to understand what makes a Web colink structure meaningful.
  6. Waltman, L.; Calero-Medina, C.; Kosten, J.; Noyons, E.C.M.; Tijssen, R.J.W.; Eck, N.J. van; Leeuwen, T.N. van; Raan, A.F.J. van; Visser, M.S.; Wouters, P.: The Leiden ranking 2011/2012 : data collection, indicators, and interpretation (2012) 0.01
    0.013684542 = product of:
      0.09579179 = sum of:
        0.09579179 = weight(_text_:interpretation in 514) [ClassicSimilarity], result of:
          0.09579179 = score(doc=514,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.44751403 = fieldWeight in 514, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=514)
      0.14285715 = coord(1/7)
    
    Abstract
    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. The comparison focuses on the methodological choices underlying the different rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking and a number of limitations of the ranking are pointed out.
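    Of the innovations listed above, fractional counting (2) is easy to make concrete: a paper co-authored by n universities contributes 1/n of a publication to each of them, so that every paper adds exactly one publication to the system as a whole. A minimal sketch with invented affiliation lists, not Leiden's actual pipeline:

      from collections import Counter

      papers = [["Leiden", "Oxford"], ["Leiden"], ["Leiden", "Oxford", "MIT"]]

      full, fractional = Counter(), Counter()
      for unis in papers:
          unis = set(unis)
          for u in unis:
              full[u] += 1                    # full counting: 1 per contributing university
              fractional[u] += 1 / len(unis)  # fractional: credits sum to 1 per paper

      print(full["Leiden"], round(fractional["Leiden"], 2))  # 3 vs. 1.83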
  7. Riviera, E.: Testing the strength of the normative approach in citation theory through relational bibliometrics : the case of Italian sociology (2015) 0.01
    0.013684542 = product of:
      0.09579179 = sum of:
        0.09579179 = weight(_text_:interpretation in 1854) [ClassicSimilarity], result of:
          0.09579179 = score(doc=1854,freq=4.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.44751403 = fieldWeight in 1854, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1854)
      0.14285715 = coord(1/7)
    
    Abstract
    In scientometrics, citer behavior is traditionally investigated using one of two main approaches. According to the normative point of view, the behavior of scientists is regulated by norms that make the detection of citation patterns useful for the interpretation of bibliometric measures. According to the constructivist perspective, citer behavior is influenced by other factors linked to the social and/or psychological sphere that do not allow any statistical inferences that are useful for the purposes of interpretation. An intermediate position supports normative theories in describing citer behavior with respect to high citation frequencies and constructivist theories with respect to low citation counts. In this paper, this idea was tested in a case study of the Italian sociology community. Italian sociology is characterized by an unusual organization into three "political" or "ideological" camps, and belonging to one camp can be considered a potentially strong constructivist reason to cite. An all-author co-citation analysis was performed to map the structure of the Italian sociology community and look for evidence of three camps. We did not expect to find evidence of this configuration in the co-citation map. The map, in fact, included authors who obtained high citation counts that are supposedly produced by a normative-oriented behavior. The results confirmed this hypothesis and showed that the clusters seemed to be divided according to topic and not by camp. Relevant scientific works were cited by the members of the entire community regardless of their membership in any particular camp.
  8. Ivancheva, L.E.: The non-Gaussian nature of bibliometric and scientometric distributions : a new approach to interpretation (2001) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 6846) [ClassicSimilarity], result of:
          0.09482904 = score(doc=6846,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 6846, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6846)
      0.14285715 = coord(1/7)
    
  9. Morris, S.A.; Yen, G.; Wu, Z.; Asnake, B.: Time line visualization of research fronts (2003) 0.01
    0.013547006 = product of:
      0.09482904 = sum of:
        0.09482904 = weight(_text_:interpretation in 1452) [ClassicSimilarity], result of:
          0.09482904 = score(doc=1452,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.4430163 = fieldWeight in 1452, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1452)
      0.14285715 = coord(1/7)
    
    Abstract
    Research fronts, defined as clusters of documents that tend to cite a fixed, time-invariant set of base documents, are plotted as time lines for visualization and exploration. Using a set of documents related to the subject of anthrax research, this article illustrates the construction, exploration, and interpretation of time lines for the purpose of identifying and visualizing temporal changes in research activity through journal articles. Such information is useful for presentation to members of expert panels used for technology forecasting.
  10. Losee, R.M.: Term dependence : a basis for Luhn and Zipf models (2001) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 6976) [ClassicSimilarity], result of:
          0.081282035 = score(doc=6976,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 6976, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=6976)
      0.14285715 = coord(1/7)
    
    Abstract
    There are regularities in the statistical information provided by natural language terms about neighboring terms. We find that when phrase rank increases, moving from common to less common phrases, the value of the expected mutual information measure (EMIM) between the terms regularly decreases. Luhn's model suggests that midrange terms are the best index terms and relevance discriminators. We suggest reasons for this principle based on the empirical relationships shown here between the rank of terms within phrases and the average mutual information between terms, which we refer to as the Inverse Representation-EMIM principle. We also suggest an Inverse EMIM term weight for indexing or retrieval applications that is consistent with Luhn's distribution. An information-theoretic interpretation of Zipf's Law is provided. Using the regularity noted here, we suggest that Zipf's Law is a consequence of the statistical dependencies that exist between terms, described here using information-theoretic concepts.
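    For readers who want to experiment: the expected mutual information measure between two terms can be computed from their 2x2 presence/absence contingency table, EMIM = sum over the four cells of P(x,y) * ln(P(x,y) / (P(x)P(y))). A minimal sketch under that standard definition, with invented toy counts (not the paper's data):

      import math

      def emim(n_both, n_t_only, n_u_only, n_neither):
          # expected mutual information over the 2x2 presence/absence table
          n = n_both + n_t_only + n_u_only + n_neither
          joint = [[n_neither / n, n_u_only / n],
                   [n_t_only / n, n_both / n]]
          p_t = (n_both + n_t_only) / n
          p_u = (n_both + n_u_only) / n
          marg_t, marg_u = (1 - p_t, p_t), (1 - p_u, p_u)
          return sum(joint[i][j] * math.log(joint[i][j] / (marg_t[i] * marg_u[j]))
                     for i in (0, 1) for j in (0, 1) if joint[i][j] > 0)

      # counts: windows with both terms, t alone, u alone, neither
      print(emim(30, 20, 10, 940))  # > 0: the terms co-occur more than chance predicts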
  11. Thelwall, M.: A comparison of sources of links for academic Web impact factor calculations (2002) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 4474) [ClassicSimilarity], result of:
          0.081282035 = score(doc=4474,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 4474, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=4474)
      0.14285715 = coord(1/7)
    
    Abstract
    There has been much recent interest in extracting information from collections of Web links. One tool that has been used is Ingwersen's Web impact factor. It has been demonstrated that several versions of this metric can produce results that correlate with research ratings of British universities, showing that, despite being a measure of a purely Internet phenomenon, the results are susceptible to a wider interpretation. This paper addresses the question of which is the best possible domain to count backlinks from, if research is the focus of interest. WIFs for British universities calculated from several different source domains are compared, primarily the .edu, .ac.uk and .uk domains, and the entire Web. The results show that all four areas produce WIFs that correlate strongly with research ratings, but that none produce incontestably superior figures. It was also found that the WIF was less able to differentiate in more homogeneous subsets of universities, although positive results are still possible.
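    In its simplest form, Ingwersen's Web impact factor is the number of pages linking to a site divided by the number of pages in the site; the variants compared above differ only in which source domain the backlinks are counted from. A minimal sketch with invented counts:

      def wif(inlinking_pages, site_pages):
          # Web impact factor: pages linking in, per page of the target site
          return inlinking_pages / site_pages

      site_pages = 31000  # pages in one university's site (invented)
      backlinks = {".edu": 5200, ".ac.uk": 8100, ".uk": 9600, "whole Web": 24000}
      for domain, links in backlinks.items():
          print(f"{domain:>10}: WIF = {wif(links, site_pages):.4f}")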
  12. Schreiber, M.: Fractionalized counting of publications for the g-Index (2009) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 3125) [ClassicSimilarity], result of:
          0.081282035 = score(doc=3125,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 3125, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3125)
      0.14285715 = coord(1/7)
    
    Abstract
    L. Egghe (2008) studied the h-index (Hirsch index) and the g-index, counting the authorship of cited articles in a fractional way. But his definition of the gF-index for the case that the article count is fractionalized yielded values that were close to or even larger than the original g-index. Here I propose an alternative definition by which the g-index is modified in such a way that the resulting gm-index is always smaller than the original g-index. Based on the interpretation of the g-index as the highest number of articles of a scientist that received on average g or more citations, in the specification of the new gm-index the articles are counted fractionally not only for the rank but also for the average.
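    Using the interpretation quoted above (the highest number g of articles that received on average g or more citations, i.e. whose cumulative citations reach g^2), the unmodified g-index is straightforward to compute; Schreiber's gm variant would replace the integer rank below by a cumulative fractional article count. A minimal sketch of the plain index:

      def g_index(citations):
          # largest g such that the g most-cited papers total >= g^2 citations
          cites = sorted(citations, reverse=True)
          cumulative, g = 0, 0
          for rank, c in enumerate(cites, start=1):
              cumulative += c
              if cumulative >= rank * rank:
                  g = rank
          return g

      print(g_index([10, 9, 7, 5, 3, 1, 0]))  # 5: the top 5 papers total 34 >= 25 citations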
  13. Chen, C.; Ibekwe-SanJuan, F.; Hou, J.: ¬The structure and dynamics of cocitation clusters : a multiple-perspective cocitation analysis (2010) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 3591) [ClassicSimilarity], result of:
          0.081282035 = score(doc=3591,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 3591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=3591)
      0.14285715 = coord(1/7)
    
    Abstract
    A multiple-perspective cocitation analysis method is introduced for characterizing and interpreting the structure and dynamics of cocitation clusters. The method facilitates analytic and sense-making tasks by integrating network visualization, spectral clustering, automatic cluster labeling, and text summarization. Cocitation networks are decomposed into cocitation clusters. The interpretation of these clusters is augmented by automatic cluster labeling and summarization. The method focuses on the interrelations between a cocitation cluster's members and their citers. The generic method is applied to a three-part analysis of the field of information science as defined by 12 journals published between 1996 and 2008: (a) a comparative author cocitation analysis (ACA), (b) a progressive ACA of a time series of cocitation networks, and (c) a progressive document cocitation analysis (DCA). Results show that the multiple-perspective method increases the interpretability and accountability of both ACA and DCA networks.
  14. Zhao, D.; Strotmann, A.: In-text author citation analysis : feasibility, benefits, and limitations (2014) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 1535) [ClassicSimilarity], result of:
          0.081282035 = score(doc=1535,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 1535, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=1535)
      0.14285715 = coord(1/7)
    
    Abstract
    This article explores the feasibility, benefits, and limitations of in-text author citation analysis and tests how well it works compared with traditional author citation analysis using citation databases. In-text author citation analysis refers to author-based citation analysis using in-text citation data from full-text articles rather than reference data from citation databases. It has the potential to help with the application of citation analysis to research fields such as the social sciences that are not covered well by citation databases and to support weighted citation and cocitation counting for improved citation analysis results. We found that in-text author citation analysis can work as well as traditional citation analysis using citation databases for both author ranking and mapping if author name disambiguation is performed properly. Using in-text citation data without any author name disambiguation, ranking authors by citations is useless, whereas cocitation analysis works well for identifying major specialties and their interrelationships with cautions required for the interpretation of small research areas and some authors' memberships in specialties.
  15. Fang, Z.; Dudek, J.; Costas, R.: Facing the volatility of tweets in altmetric research (2022) 0.01
    0.01161172 = product of:
      0.081282035 = sum of:
        0.081282035 = weight(_text_:interpretation in 605) [ClassicSimilarity], result of:
          0.081282035 = score(doc=605,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.37972826 = fieldWeight in 605, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.046875 = fieldNorm(doc=605)
      0.14285715 = coord(1/7)
    
    Abstract
    The re-collection of tweets from data snapshots is a common methodological step in Twitter-based research. Understanding the volatility of tweets over time is important for validating the reliability of metrics based on Twitter data. We tracked a set of 37,918 original scholarly tweets mentioning COVID-19-related research daily for 56 days and captured the reasons for the changes in their availability over time. Results show that the proportion of unavailable tweets increased from 1.6% to 2.6% over the time window observed. Of the 1,323 tweets that became unavailable at some point in the period observed, 30.5% became available again afterwards. "Revived" tweets resulted mainly from the unprotecting, reactivating, or unsuspending of users' accounts. Our findings highlight the importance of noting this dynamic nature of Twitter data in altmetric research and testify to the challenges that this poses for the retrieval, processing, and interpretation of Twitter data about scientific papers.
  16. Egghe, L.: The power of power laws and an interpretation of Lotkaian informetric systems as self-similar fractals (2005) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 3466) [ClassicSimilarity], result of:
          0.067735024 = score(doc=3466,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 3466, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3466)
      0.14285715 = coord(1/7)
    
  17. Thelwall, M.; Vaughan, L.; Björneborn, L.: Webometrics (2004) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 4279) [ClassicSimilarity], result of:
          0.067735024 = score(doc=4279,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 4279, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4279)
      0.14285715 = coord(1/7)
    
    Abstract
    Webometrics, the quantitative study of Web-related phenomena, emerged from the realization that methods originally designed for bibliometric analysis of scientific journal article citation patterns could be applied to the Web, with commercial search engines providing the raw data. Almind and Ingwersen (1997) defined the field and gave it its name. Other pioneers included Rodriguez Gairin (1997) and Aguillo (1998). Larson (1996) undertook exploratory link structure analysis, as did Rousseau (1997). Webometrics encompasses research from fields beyond information science such as communication studies, statistical physics, and computer science. In this review we concentrate on link analysis, but also cover other aspects of webometrics, including Web log file analysis. One theme that runs through this chapter is the messiness of Web data and the need for data cleansing heuristics. The uncontrolled Web creates numerous problems in the interpretation of results, for instance, from the automatic creation or replication of links. The loose connection between top-level domain specifications (e.g., com, edu, and org) and their actual content is also a frustrating problem. For example, many .com sites contain noncommercial content, although com is ostensibly the main commercial top-level domain. Indeed, a skeptical researcher could claim that obstacles of this kind are so great that all Web analyses lack value. As will be seen, one response to this view, a view shared by critics of evaluative bibliometrics, is to demonstrate that Web data correlate significantly with some non-Web data in order to prove that the Web data are not wholly random. A practical response has been to develop increasingly sophisticated data cleansing techniques and multiple data analysis methods.
  18. Wallace, M.L.; Gingras, Y.; Duhon, R.: ¬A new approach for detecting scientific specialties from raw cocitation networks (2009) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 2709) [ClassicSimilarity], result of:
          0.067735024 = score(doc=2709,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 2709, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2709)
      0.14285715 = coord(1/7)
    
    Abstract
    We use a technique recently developed by V. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre (2008) to detect scientific specialties from author cocitation networks. This algorithm has distinct advantages over most previous methods used to obtain cocitation clusters since it avoids the use of similarity measures, relies entirely on the topology of the weighted network, and can be applied to relatively large networks. Most importantly, it requires no subjective interpretation of the cocitation data or of the communities found. Using two examples, we show that the resulting specialties are the smallest coherent groups of researchers (within a hierarchy of cluster sizes) and can thus be identified unambiguously. Furthermore, we confirm that these communities are indeed representative of what we know about the structure of a given scientific discipline and that as specialties, they can be accurately characterized by a few keywords (from the publication titles). We argue that this robust and efficient algorithm is particularly well-suited to cocitation networks and that the results generated can be of great use to researchers studying various facets of the structure and evolution of science.
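    The technique by Blondel, Guillaume, Lambiotte, and Lefebvre (2008) is what is now commonly called Louvain community detection, a greedy modularity optimization that works directly on the weighted network topology. A minimal sketch of applying it to a toy author-cocitation network via networkx's implementation (the data are invented):

      from itertools import combinations
      import networkx as nx

      # each entry: the set of authors cited together in one citing paper
      citing_papers = [{"A", "B", "C"}, {"A", "B"}, {"A", "C"},
                       {"D", "E"}, {"D", "E", "F"}]

      G = nx.Graph()
      for cited in citing_papers:
          for a, b in combinations(sorted(cited), 2):
              # edge weight = number of papers cociting this pair of authors
              w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
              G.add_edge(a, b, weight=w)

      print(nx.community.louvain_communities(G, weight="weight", seed=42))
      # e.g. [{'A', 'B', 'C'}, {'D', 'E', 'F'}]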
  19. Bornmann, L.; Moya Anegón, F. de: What proportion of excellent papers makes an institution one of the best worldwide? : Specifying thresholds for the interpretation of the results of the SCImago Institutions Ranking and the Leiden Ranking (2014) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1235) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1235,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1235, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1235)
      0.14285715 = coord(1/7)
    
  20. Ye, F.Y.; Leydesdorff, L.: The "academic trace" of the performance matrix : a mathematical synthesis of the h-index and the integrated impact indicator (I3) (2014) 0.01
    0.009676432 = product of:
      0.067735024 = sum of:
        0.067735024 = weight(_text_:interpretation in 1237) [ClassicSimilarity], result of:
          0.067735024 = score(doc=1237,freq=2.0), product of:
            0.21405315 = queryWeight, product of:
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.037368443 = queryNorm
            0.3164402 = fieldWeight in 1237, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.7281795 = idf(docFreq=390, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1237)
      0.14285715 = coord(1/7)
    
    Abstract
    The h-index provides us with 9 natural classes, which can be written as a matrix of 3 vectors. The 3 vectors are: X = (X1, X2, X3), which indicates the distribution of publications over the h-core, the h-tail, and the uncited ones, respectively; Y = (Y1, Y2, Y3), which denotes the citation distribution of the h-core, the h-tail, and the so-called "excess" citations (above the h-threshold), respectively; and Z = (Z1, Z2, Z3) = (Y1-X1, Y2-X2, Y3-X3). The matrix V = (X, Y, Z)^T constructs a measure of academic performance, in which the 9 numbers can all be given meanings in different dimensions. The "academic trace" tr(V) of this matrix follows naturally and contributes a unique indicator of total academic achievement, summarizing and weighting the accumulation of publications and citations. This measure can also be used to combine the advantages of the h-index and the integrated impact indicator (I3) into a single number with a meaningful interpretation of the values. We illustrate the use of tr(V) for the cases of 2 journal sets, 2 universities, and ourselves as 2 individual authors.
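    Read literally from the abstract, V is the 3x3 matrix with rows X, Y, and Z, so tr(V) = X1 + Y2 + Z3. A minimal sketch under the usual reading of the h-core decomposition (X1 = h, X3 = uncited papers, Y2 = citations to h-tail papers, Y3 = h-core citations in excess of h^2); this is our interpretation of the definitions quoted above, not the authors' own code:

      def academic_trace(citations):
          cites = sorted(citations, reverse=True)
          h = sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)  # h-index
          core, tail = cites[:h], cites[h:]
          x1 = h                                # X1: papers in the h-core
          x3 = sum(1 for c in cites if c == 0)  # X3: uncited papers
          y2 = sum(tail)                        # Y2: citations to the h-tail
          y3 = sum(core) - h * h                # Y3: "excess" citations above h^2
          z3 = y3 - x3                          # Z3 = Y3 - X3
          return x1 + y2 + z3                   # tr(V) = X1 + Y2 + Z3

      print(academic_trace([10, 9, 7, 5, 3, 1, 0]))  # h=4: tr(V) = 4 + 4 + (15 - 1) = 22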

Languages

  • e 131
  • d 14
  • m 1
  • ro 1

Types

  • a 142
  • el 4
  • s 3
  • m 2
  • r 1
  • x 1