Search (127 results, page 1 of 7)

  • Filter: theme_ss:"Informetrie"
  1. Rotolo, D.; Rafols, I.; Hopkins, M.M.; Leydesdorff, L.: Strategic intelligence on emerging technologies : scientometric overlay mapping (2017) 0.04
    0.041222658 = product of:
      0.082445316 = sum of:
        0.082445316 = product of:
          0.16489063 = sum of:
            0.16489063 = weight(_text_:intelligence in 3322) [ClassicSimilarity], result of:
              0.16489063 = score(doc=3322,freq=6.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.6098877 = fieldWeight in 3322, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3322)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This paper examines the use of scientometric overlay mapping as a tool of "strategic intelligence" to aid the governing of emerging technologies. We develop an integrative synthesis of different overlay mapping techniques and associated perspectives on technological emergence across geographical, social, and cognitive spaces. To do so, we longitudinally analyze (with publication and patent data) three case studies of emerging technologies in the medical domain. These are RNA interference (RNAi), human papillomavirus (HPV) testing technologies for cervical cancer, and thiopurine methyltransferase (TPMT) genetic testing. Given the flexibility (i.e., adaptability to different sources of data) and granularity (i.e., applicability across multiple levels of data aggregation) of overlay mapping techniques, we argue that these techniques can favor the integration and comparison of results from different contexts and cases, thus potentially functioning as a platform for "distributed" strategic intelligence for analysts and decision makers.
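    The relevance figures above are Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. The following is a minimal sketch of how the listed factors combine for entry 1; the values are copied from the breakdown above, and only the arithmetic is reconstructed.

      import math

      # Factors reported in the explain tree for doc 3322 (entry 1).
      freq       = 6.0          # occurrences of "intelligence" in the document
      idf        = 5.3116927    # inverse document frequency, as reported
      query_norm = 0.050899457  # queryNorm
      field_norm = 0.046875     # fieldNorm(doc=3322)
      coord      = 0.5          # coord(1/2): one of two query clauses matched

      tf           = math.sqrt(freq)          # 2.4494898
      query_weight = idf * query_norm         # 0.2703623
      field_weight = tf * idf * field_norm    # 0.6098877

      # Two nested coord(1/2) factors appear in the tree above.
      score = query_weight * field_weight * coord * coord
      print(round(score, 9))                  # ~0.041222658, the reported score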
  2. Rokach, L.; Kalech, M.; Blank, I.; Stern, R.: Who is going to win the next Association for the Advancement of Artificial Intelligence Fellowship Award? : evaluating researchers by mining bibliographic data (2011) 0.03
    0.034352217 = product of:
      0.068704434 = sum of:
        0.068704434 = product of:
          0.13740887 = sum of:
            0.13740887 = weight(_text_:intelligence in 4945) [ClassicSimilarity], result of:
              0.13740887 = score(doc=4945,freq=6.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.50823975 = fieldWeight in 4945, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4945)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Accurately evaluating a researcher and the quality of his or her work is an important task when decision makers have to decide on such matters as promotions and awards. Publications and citations play a key role in this task, and many previous studies have proposed using measurements based on them for evaluating researchers. Machine learning techniques as a way of enhancing the evaluation process have remained relatively unexplored. We propose using a machine learning approach for evaluating researchers. In particular, the proposed method combines the outputs of three learning techniques (logistic regression, decision trees, and artificial neural networks) to obtain a unified prediction with improved accuracy. We conducted several experiments to evaluate the model's ability to: (a) classify researchers in the field of artificial intelligence as Association for the Advancement of Artificial Intelligence (AAAI) fellows and (b) predict the next AAAI fellowship winners. We show that both our classification and prediction methods are more accurate than previous measurement methods, reaching a precision of 96% and a recall of 92%.
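    The abstract above describes combining logistic regression, decision trees, and artificial neural networks into a unified prediction. Below is a minimal, hypothetical sketch of such a combination using scikit-learn soft voting; the feature matrix X and the fellow/non-fellow labels y are placeholders, not the paper's actual data or combination scheme.

      import numpy as np
      from sklearn.ensemble import VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Placeholder bibliometric features (e.g., publication and citation counts)
      # and labels (1 = AAAI fellow, 0 = not); purely illustrative data.
      rng = np.random.default_rng(0)
      X = rng.poisson(lam=20, size=(200, 4)).astype(float)
      y = (X[:, 0] + X[:, 1] > 45).astype(int)

      ensemble = VotingClassifier(
          estimators=[
              ("lr", LogisticRegression(max_iter=1000)),
              ("dt", DecisionTreeClassifier(max_depth=4)),
              ("nn", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
          ],
          voting="soft",  # average the predicted probabilities of the three models
      )
      ensemble.fit(X, y)
      print(ensemble.predict(X[:5]))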
  3. Rokach, L.; Mitra, P.: Parsimonious citer-based measures : the artificial intelligence domain as a case study (2013) 0.03
    0.033658158 = product of:
      0.067316316 = sum of:
        0.067316316 = product of:
          0.13463263 = sum of:
            0.13463263 = weight(_text_:intelligence in 212) [ClassicSimilarity], result of:
              0.13463263 = score(doc=212,freq=4.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.49797118 = fieldWeight in 212, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=212)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents a new Parsimonious Citer-Based Measure for assessing the quality of academic papers. This new measure is parsimonious as it looks for the smallest set of citing authors (citers) who have read a certain paper. The Parsimonious Citer-Based Measure aims to address potential distortion in the values of existing citer-based measures. These distortions occur because of various factors, such as the practice of hyperauthorship. This new measure is empirically compared with existing measures, such as the number of citers and the number of citations in the field of artificial intelligence (AI). The results show that the new measure is highly correlated with those two measures. However, the new measure is more robust against citation manipulations and better differentiates between prominent and nonprominent AI researchers than the above-mentioned measures.
  4. Karki, M.M.S.: Patent citation analysis : a policy analysis tool (1997) 0.03
    0.031733215 = product of:
      0.06346643 = sum of:
        0.06346643 = product of:
          0.12693286 = sum of:
            0.12693286 = weight(_text_:intelligence in 2076) [ClassicSimilarity], result of:
              0.12693286 = score(doc=2076,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.46949172 = fieldWeight in 2076, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2076)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Citation analysis of patents uses bibliometric techniques to analyze the wealth of information contained in patents. Describes the various facets of patent citations and patent citation studies and their important applications. Describes the construction of technology indicators based on patent citation analysis, including: identification of leading-edge technological activity; measurement of national patent citation performance; competitive intelligence; linkages to science; measurement of foreign dependence; highly cited patents; and the number of non-patent links.
  5. Herb, U.: Überwachungskapitalismus und Wissenschaftssteuerung (2019) 0.03
    0.031733215 = product of:
      0.06346643 = sum of:
        0.06346643 = product of:
          0.12693286 = sum of:
            0.12693286 = weight(_text_:intelligence in 5624) [ClassicSimilarity], result of:
              0.12693286 = score(doc=5624,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.46949172 = fieldWeight in 5624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5624)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The metamorphosis of the academic publisher Elsevier into a research intelligence service provider is paradigmatic of the new possibilities for monitoring and steering science.
  6. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.027584694 = product of:
      0.05516939 = sum of:
        0.05516939 = product of:
          0.11033878 = sum of:
            0.11033878 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.11033878 = score(doc=5509,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986), S.417-419
  7. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.027584694 = product of:
      0.05516939 = sum of:
        0.05516939 = product of:
          0.11033878 = sum of:
            0.11033878 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11033878 = score(doc=6091,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  8. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.027584694 = product of:
      0.05516939 = sum of:
        0.05516939 = product of:
          0.11033878 = sum of:
            0.11033878 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.11033878 = score(doc=1080,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  9. Su, Y.; Han, L.-F.: A new literature growth model : variable exponential growth law of literature (1998) 0.02
    0.024381656 = product of:
      0.048763312 = sum of:
        0.048763312 = product of:
          0.097526625 = sum of:
            0.097526625 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
              0.097526625 = score(doc=3690,freq=4.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.54716086 = fieldWeight in 3690, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3690)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 5.1999 19:22:35
  10. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024381656 = product of:
      0.048763312 = sum of:
        0.048763312 = product of:
          0.097526625 = sum of:
            0.097526625 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.097526625 = score(doc=3925,freq=4.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  11. Diodato, V.: Dictionary of bibliometrics (1994) 0.02
    0.024136607 = product of:
      0.048273213 = sum of:
        0.048273213 = product of:
          0.09654643 = sum of:
            0.09654643 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
              0.09654643 = score(doc=5666,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.5416616 = fieldWeight in 5666, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=5666)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Review in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  12. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.02
    0.024136607 = product of:
      0.048273213 = sum of:
        0.048273213 = product of:
          0.09654643 = sum of:
            0.09654643 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.09654643 = score(doc=6902,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:29
  13. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.02
    0.024136607 = product of:
      0.048273213 = sum of:
        0.048273213 = product of:
          0.09654643 = sum of:
            0.09654643 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
              0.09654643 = score(doc=4689,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.5416616 = fieldWeight in 4689, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4689)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 18:55:55
  14. Hudnut, S.K.: Finding answers by the numbers : statistical analysis of online search results (1993) 0.02
    0.023799911 = product of:
      0.047599822 = sum of:
        0.047599822 = product of:
          0.095199645 = sum of:
            0.095199645 = weight(_text_:intelligence in 555) [ClassicSimilarity], result of:
              0.095199645 = score(doc=555,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.3521188 = fieldWeight in 555, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=555)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Online searchers today no longer limit themselves to locating references to articles. More and more, they are called upon to locate specific answers to questions such as: Who is my chief competitor for this technology? Who is publishing the most on this subject? What is the geographic distribution of this product? These questions demand answers, not necessarily from record content, but from statistical analysis of the terms in a set of records. Most online services now provide a tool for statistical analysis, such as GET on Orbit, ZOOM on ESA/IRS, and RANK/RANK FILES on Dialog. With these commands, users can analyze term frequency to extrapolate very precise answers to a wide range of questions. This paper discusses the many uses of term frequency analysis and how it can be applied to areas of competitive intelligence, market analysis, bibliometric analysis, and improvement of search results. The applications are illustrated by examples from Dialog.
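    Commands such as RANK on Dialog essentially tally how often each value occurs in a chosen field across the records of a result set. Below is a minimal sketch of that kind of term-frequency analysis; the sample records and field names are hypothetical, not taken from the paper.

      from collections import Counter

      # Hypothetical retrieved records; in practice these would come from an
      # online result set (e.g., on Dialog).
      records = [
          {"assignee": "Acme Corp", "country": "US"},
          {"assignee": "Acme Corp", "country": "DE"},
          {"assignee": "Beta Ltd",  "country": "US"},
      ]

      def rank_field(records, field):
          """Tally how often each value of `field` occurs across the records,
          in the spirit of GET/ZOOM/RANK-style commands."""
          return Counter(record[field] for record in records).most_common()

      print(rank_field(records, "assignee"))  # [('Acme Corp', 2), ('Beta Ltd', 1)]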
  15. Davies, R.: Q-analysis : a methodology for librarianship and information science (1985) 0.02
    0.023799911 = product of:
      0.047599822 = sum of:
        0.047599822 = product of:
          0.095199645 = sum of:
            0.095199645 = weight(_text_:intelligence in 589) [ClassicSimilarity], result of:
              0.095199645 = score(doc=589,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.3521188 = fieldWeight in 589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=589)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Q-analysis is a methodology for investigating a wide range of structural phenomena. Structures are defined in terms of relations between members of sets, and their salient features are revealed using techniques of algebraic topology. However, the basic method can be mastered by non-mathematicians. Q-analysis has been applied to problems as diverse as discovering the rules for the diagnosis of a rare disease and the study of tactics in a football match. Other applications include some of interest to librarians and information scientists. In bibliometrics, Q-analysis has proved capable of emulating techniques such as bibliographic coupling, co-citation analysis, and co-word analysis. It has also been used to produce a classification scheme for television programmes based on different principles from most bibliographic classifications. This paper introduces the basic ideas of Q-analysis. Applications relevant to librarianship and information science are reviewed, and present limitations of the approach are described. New theoretical advances, including some in other fields such as planning and design theory and artificial intelligence, may lead to a still more powerful method of investigating structure.
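    The core of Q-analysis is a relation between two sets (for example, documents and descriptors) from which shared-membership structure is read off. The toy sketch below shows q-nearness only, with made-up documents and descriptors; it ignores the full algebraic-topological machinery (q-connectivity chains, structure vectors).

      # Documents as simplices whose vertices are descriptors; two documents
      # are q-near if they share q+1 descriptors. Hypothetical data.
      docs = {
          "D1": {"bibliometrics", "citation", "coupling"},
          "D2": {"bibliometrics", "citation", "co-word"},
          "D3": {"classification", "television"},
      }

      def q_nearness(a, b):
          """Return q such that documents a and b share q+1 descriptors."""
          return len(docs[a] & docs[b]) - 1

      print(q_nearness("D1", "D2"))  # 1  (two shared descriptors)
      print(q_nearness("D1", "D3"))  # -1 (no shared descriptors)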
  16. Leydesdorff, L.: Why words and co-words cannot map the development of the sciences (1997) 0.02
    0.023799911 = product of:
      0.047599822 = sum of:
        0.047599822 = product of:
          0.095199645 = sum of:
            0.095199645 = weight(_text_:intelligence in 147) [ClassicSimilarity], result of:
              0.095199645 = score(doc=147,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.3521188 = fieldWeight in 147, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=147)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Analyses and compares co-occurrences and co-absences of words in a restricted set of full-text articles from a sub-specialty of biochemistry. Using the distribution of words over the sections, a clear distinction among 'theoretical', 'observational', and 'methodological' terminology can be made in individual articles. However, at the level of the set this structure is no longer retrievable: words change both in the frequencies of their relations with other words and in positional meaning from one text to another. The fluidity of networks in which nodes and links may change positions is expected to destabilise representations of the development of the sciences on the basis of co-occurrences and co-absences of words. Discusses the consequences for the lexicographic approach to generating artificial intelligence from scientific texts.
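    Co-occurrence and co-absence counts of the kind analysed above can be tallied directly from word sets per text. The following is a minimal sketch under assumed toy data; the terms and texts are placeholders, not drawn from the article.

      from collections import Counter
      from itertools import combinations

      # Hypothetical texts (e.g., article sections) represented as word sets.
      texts = [
          {"enzyme", "kinetics", "model"},
          {"enzyme", "assay", "measurement"},
          {"model", "simulation"},
      ]
      vocabulary = sorted(set().union(*texts))

      cooccurrences, coabsences = Counter(), Counter()
      for words in texts:
          for a, b in combinations(vocabulary, 2):
              if a in words and b in words:
                  cooccurrences[(a, b)] += 1   # both terms appear in this text
              elif a not in words and b not in words:
                  coabsences[(a, b)] += 1      # both terms are absent from it

      print(cooccurrences.most_common(3))
      print(coabsences.most_common(3))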
  17. Leydesdorff, L.; Goldstone, R.L.: Interdisciplinarity at the journal and specialty level : the changing knowledge bases of the journal Cognitive Science (2014) 0.02
    0.023799911 = product of:
      0.047599822 = sum of:
        0.047599822 = product of:
          0.095199645 = sum of:
            0.095199645 = weight(_text_:intelligence in 1187) [ClassicSimilarity], result of:
              0.095199645 = score(doc=1187,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.3521188 = fieldWeight in 1187, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1187)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using the referencing patterns in articles in Cognitive Science over three decades, we analyze the knowledge base of this literature in terms of its changing disciplinary composition. Three periods are distinguished: (A) construction of the interdisciplinary space in the 1980s, (B) development of an interdisciplinary orientation in the 1990s, and (C) reintegration into "cognitive psychology" in the 2000s. The fluidity and fuzziness of the interdisciplinary delineations in the different visualizations can be reduced and clarified using factor analysis. We also explore newly available routines ("CorText") to analyze this development in terms of "tubes" using an alluvial map and compare the results with an animation (using "Visone"). The historical specificity of this development can be compared with the development of "artificial intelligence" into an integrated specialty during this same period. Interdisciplinarity should be defined differently at the level of journals and of specialties.
  18. Lewison, G.: The work of the Bibliometrics Research Group (City University) and associates (2005) 0.02
    0.02068852 = product of:
      0.04137704 = sum of:
        0.04137704 = product of:
          0.08275408 = sum of:
            0.08275408 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
              0.08275408 = score(doc=4890,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.46428138 = fieldWeight in 4890, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4890)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2007 17:02:22
  19. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.02
    0.02068852 = product of:
      0.04137704 = sum of:
        0.04137704 = product of:
          0.08275408 = sum of:
            0.08275408 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
              0.08275408 = score(doc=1239,freq=2.0), product of:
                0.17824122 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050899457 = queryNorm
                0.46428138 = fieldWeight in 1239, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1239)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    18. 3.2014 19:13:22
  20. Aledo, J.A.; Gámez, J.A.; Molina, D.; Rosete, A.: Consensus-based journal rankings : a complementary tool for bibliometric evaluation (2018) 0.02
    0.01983326 = product of:
      0.03966652 = sum of:
        0.03966652 = product of:
          0.07933304 = sum of:
            0.07933304 = weight(_text_:intelligence in 4364) [ClassicSimilarity], result of:
              0.07933304 = score(doc=4364,freq=2.0), product of:
                0.2703623 = queryWeight, product of:
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.050899457 = queryNorm
                0.29343233 = fieldWeight in 4364, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.3116927 = idf(docFreq=592, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4364)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Annual journal rankings are usually considered a tool for the evaluation of research and researchers. Although they are an objective resource for such evaluation, they also present drawbacks: (a) the uncertainty about the definite position of a target journal in the corresponding annual ranking at the time a journal is selected, and (b) despite nonsignificant differences in score (for instance, impact factor) between consecutive journals in the ranking, the journals are strictly ranked and eventually placed in different terciles/quartiles, which may have a significant influence on the subsequent evaluation. In this article we present several proposals to obtain an aggregated consensus ranking as an alternative/complementary tool to standardize annual rankings. To illustrate the proposed methodology, we use the Journal Citation Reports as a case study, and in particular the category Computer Science: Artificial Intelligence (CS:AI). In the context of the consensus rankings obtained by the different methods, we discuss the convenience of using one or the other procedure according to the corresponding framework. In particular, our proposals allow us to obtain consensus rankings that avoid crisp frontiers between similarly ranked journals and consider the longitudinal/temporal evolution of the journals.
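    The paper's own aggregation procedures are not reproduced here, but the general idea of a consensus ranking can be illustrated with a simple Borda count over several hypothetical annual rankings.

      from collections import defaultdict

      # Three hypothetical annual rankings of the same journals (best first);
      # not data from the article.
      rankings = [
          ["J1", "J2", "J3", "J4"],
          ["J2", "J1", "J3", "J4"],
          ["J1", "J3", "J2", "J4"],
      ]

      # Borda count: the journal in position p of an n-journal ranking gets
      # n - (p + 1) points; summing over rankings yields a consensus order.
      scores = defaultdict(int)
      for ranking in rankings:
          n = len(ranking)
          for position, journal in enumerate(ranking):
              scores[journal] += n - (position + 1)

      consensus = sorted(scores, key=scores.get, reverse=True)
      print(consensus)  # ['J1', 'J2', 'J3', 'J4']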

Languages

  • e (English) 117
  • d (German) 9
  • ro (Romanian) 1

Types

  • a 125
  • el 2
  • m 2
  • s 1