Search (4 results, page 1 of 1)

  • author_ss:"Herrera, F."
  • theme_ss:"Informetrie"
  1. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: Science mapping software tools : review, analysis, and cooperative study among tools (2011) 0.00
    
    Abstract
    Science mapping aims to build bibliometric maps that describe how specific disciplines, scientific domains, or research fields are conceptually, intellectually, and socially structured. Different techniques and software tools have been proposed to carry out science mapping analysis. The aim of this article is to review, analyze, and compare some of these software tools, taking into account aspects such as the bibliometric techniques available and the different kinds of analysis.
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.7, pp.1382-1402
  2. Alonso, S.; Cabrerizo, F.J.; Herrera-Viedma, E.; Herrera, F.: WoS query partitioner : a tool to retrieve very large numbers of items from the Web of Science using different source-based partitioning approaches (2010) 0.00
    
    Abstract
    Thomson Reuters' Web of Science (WoS) is undoubtedly a great tool for scientometric purposes. It allows one to retrieve and compute different measures, such as the total number of papers that satisfy a particular condition; however, it is also well known that this tool imposes several restrictions that make obtaining certain results difficult. One of those constraints is that the tool does not report the total count of documents in a dataset if it is larger than 100,000 items. In this article, we propose and analyze different approaches that partition the search space (using the Source field) to retrieve item counts for very large datasets from the WoS. The proposed techniques improve on previous approaches: they do not need any extra information about the retrieved dataset (thus allowing completely automatic procedures to retrieve the results), they are designed to avoid many of the restrictions imposed by the WoS, and they can easily be applied to almost any query. Finally, a description of WoS Query Partitioner, a freely available online interactive tool that implements these techniques, is presented. (A sketch of the partitioning idea follows this entry.)
    Object
    Web of Science
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.8, pp.1564-1581
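    To make the Source-based partitioning idea concrete, here is a minimal sketch (in Python) of one possible variant: a query whose result set exceeds the reporting limit is split into sub-queries by Source-title prefix, and the sub-counts are summed. The count_items stub, the SO=(...) field syntax, and the prefix-extension scheme are illustrative assumptions, not the exact procedure or interface used by WoS Query Partitioner.

```python
# Illustrative sketch only: recursive Source-based partitioning in the spirit
# of the approach described above. count_items is a hypothetical stand-in for
# a count lookup against the bibliographic database; it is not a real API call.
import string

REPORT_LIMIT = 100_000  # exact totals above this size are not reported


def count_items(query: str) -> int:
    """Placeholder: number of items matching an advanced-search query."""
    raise NotImplementedError("wire this to your retrieval backend")


def partitioned_count(base_query: str, prefix: str = "") -> int:
    """Count items for base_query by partitioning on Source-title prefixes."""
    query = base_query if not prefix else f"{base_query} AND SO=({prefix}*)"
    n = count_items(query)
    if n <= REPORT_LIMIT:
        return n  # small enough: the reported count is exact
    # Too large: refine the partition by extending the Source-title prefix.
    # (A real implementation must also handle titles shorter than the prefix
    # and titles starting with characters outside this alphabet.)
    return sum(
        partitioned_count(base_query, prefix + ch)
        for ch in string.ascii_uppercase + string.digits
    )
```

    A call such as partitioned_count('TS=(bibliometrics)') would then sum the counts of the sub-queries; the field tags shown are assumptions and would have to match the actual query syntax of the database interface.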
  3. Torres-Salinas, D.; Robinson-García, N.; Jiménez-Contreras, E.; Herrera, F.; López-Cózar, E.D.: On the use of biplot analysis for multivariate bibliometric and scientific indicators (2013) 0.00
    
    Abstract
    Bibliometric mapping and visualization techniques represent one of the main pillars in the field of scientometrics. Traditionally, the main methodologies employed for representing data have been multidimensional scaling, principal component analysis, and correspondence analysis. In this paper we present a visualization methodology known as biplot analysis for representing bibliometric and science and technology indicators. A biplot is a graphical representation of multivariate data in which the elements of a data matrix are shown as dots and vectors associated with the rows and columns of the matrix. We explore the possibilities of applying biplot analysis in the research policy area: we first introduce the reader to this methodology and then analyze its strengths and weaknesses through three case studies covering countries, universities, and scientific fields. For this we use a variant known as the JK-biplot. Finally, we compare the biplot representation with other multivariate analysis techniques. We conclude that biplot analysis can be a useful technique in scientometrics when studying multivariate data, as well as an easy-to-read tool for research decision makers. (A minimal JK-biplot construction is sketched after this entry.)
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, pp.1468-1479
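    As an illustration of the representation discussed above, the sketch below builds a generic JK-biplot (row-metric preserving) from a small invented indicator matrix via the singular value decomposition: rows (units) are plotted as dots, columns (indicators) as vectors. The data, labels, and standardization choice are assumptions for illustration only and do not reproduce the article's case studies.

```python
# Minimal JK-biplot sketch: row markers take the principal coordinates
# (U * singular values), column markers the right singular vectors.
# All data below are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

row_labels = ["Unit A", "Unit B", "Unit C", "Unit D"]
col_labels = ["output", "citations", "top 10% share"]
X = np.array([
    [120, 1500, 0.12],
    [300, 5200, 0.18],
    [ 80,  600, 0.07],
    [210, 3900, 0.22],
], dtype=float)

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # column-standardize the indicators
U, s, Vt = np.linalg.svd(Z, full_matrices=False)

G = U[:, :2] * s[:2]                       # row markers (principal coordinates)
H = Vt[:2, :].T                            # column markers (unit-length vectors)

fig, ax = plt.subplots()
ax.scatter(G[:, 0], G[:, 1])
for (x, y), name in zip(G, row_labels):
    ax.annotate(name, (x, y))
for (x, y), name in zip(H, col_labels):
    ax.arrow(0, 0, x, y, head_width=0.05, length_includes_head=True)
    ax.annotate(name, (x, y))
ax.set_xlabel("dimension 1")
ax.set_ylabel("dimension 2")
plt.show()
```

    In the JK-biplot the row markers carry the singular values, so distances between row points approximate distances between the rows of the data matrix (hence "row-metric preserving"); the GH-biplot makes the opposite choice and favours the columns.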
  4. Cobo, M.J.; López-Herrera, A.G.; Herrera-Viedma, E.; Herrera, F.: SciMAT: A new science mapping analysis software tool (2012) 0.00
    
    Abstract
    This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst carry out all the steps of the science mapping workflow. In addition, SciMAT has three key features that set it apart from other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis. (A schematic sketch of such a longitudinal workflow follows this entry.)
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.8, pp.1609-1630
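    The workflow named in the abstract (preprocessing, longitudinal periods, bibliometric impact measures) can be outlined schematically. The Python sketch below is not SciMAT code (SciMAT itself is distributed as a Java application); the record structure, the synonym-based de-duplication, and the citation-based impact measure are simplified assumptions that only show the shape of such a pipeline.

```python
# Schematic sketch of a longitudinal science-mapping pipeline:
# preprocess -> split into periods -> build per-period maps with impact measures.
# Names and rules are invented for illustration; this is not SciMAT's implementation.
from collections import Counter, defaultdict
from dataclasses import dataclass
from itertools import combinations


@dataclass
class Record:
    year: int
    keywords: list[str]
    citations: int


def preprocess(records, synonyms):
    """De-duplicate keywords: lower-case and merge known variants."""
    for r in records:
        r.keywords = sorted({synonyms.get(k.lower(), k.lower()) for k in r.keywords})
    return records


def split_periods(records, breaks):
    """Group records into consecutive periods; breaks must be sorted, e.g. [2005, 2010]."""
    periods = defaultdict(list)
    for r in records:
        label = next((f"<= {b}" for b in breaks if r.year <= b), f"> {breaks[-1]}")
        periods[label].append(r)
    return periods


def map_period(records):
    """Keyword co-occurrence counts plus a simple impact measure (summed citations)."""
    cooccurrence, impact = Counter(), Counter()
    for r in records:
        cooccurrence.update(combinations(r.keywords, 2))
        impact.update({k: r.citations for k in r.keywords})
    return cooccurrence, impact
```

    Each period's co-occurrence counts could then be normalized (for instance with an equivalence index) and clustered to obtain per-period thematic maps, which is the kind of longitudinal analysis the tool automates.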