Search (4 results, page 1 of 1)

  • author_ss:"Herrera, F."
  • theme_ss:"Informetrie"
  1. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: SciMAT: A new science mapping analysis software tool (2012)
    
    Abstract
    This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst carry out all the steps of the science mapping workflow. In addition, SciMAT offers three key features that set it apart from other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis.
    Type
    a
  2. Alonso, S.; Cabrerizo, F.J.; Herrera-Viedma, E.; Herrera, F.: WoS query partitioner : a tool to retrieve very large numbers of items from the Web of Science using different source-based partitioning approaches (2010)
    
    Abstract
    Thomson Reuters' Web of Science (WoS) is undoubtedly a great tool for scientometrics purposes. It allows one to retrieve and compute different measures, such as the total number of papers that satisfy a particular condition; however, it is also well known that this tool imposes several restrictions that make obtaining certain results difficult. One of those constraints is that the tool does not offer the total count of documents in a dataset if it is larger than 100,000 items. In this article, we propose and analyze different approaches that involve partitioning the search space (using the Source field) to retrieve item counts for very large datasets from the WoS. The proposed techniques improve on previous approaches: they do not need any extra information about the retrieved dataset (thus allowing completely automatic procedures to retrieve the results), they are designed to avoid many of the restrictions imposed by the WoS, and they can be easily applied to almost any query. Finally, a description of WoS Query Partitioner, a freely available online interactive tool that implements those techniques, is presented.
    Type
    a
  3. Torres-Salinas, D.; Robinson-García, N.; Jiménez-Contreras, E.; Herrera, F.; López-Cózar, E.D.: On the use of biplot analysis for multivariate bibliometric and scientific indicators (2013)
    
    Abstract
    Bibliometric mapping and visualization techniques represent one of the main pillars in the field of scientometrics. Traditionally, the main methodologies employed for representing data are multidimensional scaling, principal component analysis, or correspondence analysis. In this paper we present a visualization methodology known as biplot analysis for representing bibliometric and science and technology indicators. A biplot is a graphical representation of multivariate data, in which the elements of a data matrix are represented by dots and vectors associated with the rows and columns of the matrix. We explore the possibilities of applying biplot analysis in the research policy area. More specifically, we first describe and introduce the reader to this methodology and, second, analyze its strengths and weaknesses through three case studies: countries, universities, and scientific fields. For this, we use a variant known as the JK-biplot. Finally, we compare the biplot representation with other multivariate analysis techniques. We conclude that biplot analysis could be a useful technique in scientometrics when studying multivariate data, as well as an easy-to-read tool for research decision makers.
    Type
    a
  4. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011)
    
    Type
    a
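The source-based partitioning idea described in item 2 can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: `run_count` is a hypothetical stand-in for a real WoS query (returning `None` when the engine refuses to report a total above its cap), and the alphabetical source ranges are an assumed partitioning scheme.

```python
# Sketch of source-based query partitioning: when the search engine
# will not report totals above a cap, split the query by a partition
# field (here, alphabetical ranges of source names) until every
# sub-query falls under the cap, then sum the partial counts.
CAP = 100_000

def partition_count(run_count, lo="A", hi="Z"):
    """Count items whose source name starts in [lo, hi], splitting on overflow."""
    n = run_count(lo, hi)
    if n is not None:                 # engine returned a usable total
        return n
    if lo == hi:                      # cannot split a single bucket further
        raise ValueError(f"single source bucket {lo!r} exceeds cap")
    mid = chr((ord(lo) + ord(hi)) // 2)
    return (partition_count(run_count, lo, mid) +
            partition_count(run_count, chr(ord(mid) + 1), hi))
```

Because the splitting needs no knowledge of how items are distributed over sources, the procedure can run fully automatically, which mirrors the advantage the abstract claims over earlier approaches.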
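The JK-biplot used in item 3 admits a compact sketch via the singular value decomposition. This is a minimal assumed reconstruction, not the paper's implementation: a column-centred data matrix is factored as X = U S Vᵀ, rows (e.g. countries or universities) are plotted at U·S (principal coordinates, preserving Euclidean distances between rows), and columns (indicators) at V (standard coordinates).

```python
# Minimal JK-biplot coordinate computation via SVD.
import numpy as np

def jk_biplot_coords(X, k=2):
    """Return (row_points, col_vectors) for the first k biplot axes."""
    Xc = X - X.mean(axis=0)                 # centre each indicator (column)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rows = U[:, :k] * s[:k]                 # dots: one per row entity
    cols = Vt[:k].T                         # vectors: one per indicator
    return rows, cols
```

With k equal to the rank of the centred matrix, the product of row points and column vectors reconstructs the data exactly; with k=2 it gives the best rank-2 approximation, which is what the planar biplot displays.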