Search (6 results, page 1 of 1)

  • Active filter: author_ss:"Torres-Salinas, D."
  • Active filter: theme_ss:"Informetrie"
  1. Torres-Salinas, D.; Gorraiz, J.; Robinson-Garcia, N.: The insoluble problems of books : what does Altmetric.com have to offer? (2018) 0.02
    0.016507268 = product of:
      0.04126817 = sum of:
        0.0072082467 = weight(_text_:a in 4633) [ClassicSimilarity], result of:
          0.0072082467 = score(doc=4633,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.13482209 = fieldWeight in 4633, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.03125 = fieldNorm(doc=4633)
        0.034059923 = sum of:
          0.0089308405 = weight(_text_:information in 4633) [ClassicSimilarity], result of:
            0.0089308405 = score(doc=4633,freq=4.0), product of:
              0.08139861 = queryWeight, product of:
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.046368346 = queryNorm
              0.10971737 = fieldWeight in 4633, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                1.7554779 = idf(docFreq=20772, maxDocs=44218)
                0.03125 = fieldNorm(doc=4633)
          0.025129084 = weight(_text_:22 in 4633) [ClassicSimilarity], result of:
            0.025129084 = score(doc=4633,freq=2.0), product of:
              0.16237405 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046368346 = queryNorm
              0.15476047 = fieldWeight in 4633, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=4633)
      0.4 = coord(2/5)
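
    The indented tree above is Lucene's ClassicSimilarity "explain" output for this hit. As a sanity check, the sketch below (plain Python; every constant copied from the tree) recomputes the displayed score of 0.016507268: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, and the sum of the contributions is scaled by the coordination factor coord(2/5) = 0.4.

      import math

      QUERY_NORM = 0.046368346
      FIELD_NORM = 0.03125          # fieldNorm reported for doc 4633 (lossy length normalization)

      def term_contribution(idf, freq):
          query_weight = idf * QUERY_NORM                      # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * FIELD_NORM    # tf * idf * fieldNorm
          return query_weight * field_weight

      w_a    = term_contribution(1.153047, 14.0)   # ~0.0072082467
      w_info = term_contribution(1.7554779, 4.0)   # ~0.0089308405
      w_22   = term_contribution(3.5018296, 2.0)   # ~0.025129084

      score = (w_a + w_info + w_22) * 0.4          # coord(2/5): 2 of 5 query clauses matched
      print(f"{score:.9f}")                        # ~0.016507268, matching the score shown above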
    
    Abstract
    Purpose: The purpose of this paper is to analyze the capabilities, functionalities and appropriateness of Altmetric.com as a data source for the bibliometric analysis of books, in comparison to PlumX.
    Design/methodology/approach: The authors perform an exploratory analysis of the metrics that the Altmetric Explorer for Institutions platform offers for books, using two distinct data sets: on the one hand, the Book Collection included in Altmetric.com; on the other hand, Clarivate's Master Book List, which is used to test Altmetric.com's capabilities to download and merge data with external databases. Finally, the authors compare the findings with those obtained in a previous study performed with PlumX.
    Findings: Altmetric.com combines and systematically tracks a set of data sources, linked through DOI identifiers, to retrieve metadata for books, with Google Books as its main provider. It also retrieves information from commercial publishers and from some Open Access initiatives, including those led by university libraries, such as Harvard Library. The authors find issues with the linkage between records and mentions, as well as ISBN discrepancies. Furthermore, automatic bots greatly affect Wikipedia mentions of books. The comparison with PlumX suggests that neither tool provides a complete picture of the social attention generated by books; they are complementary rather than directly comparable.
    Practical implications: This study targets several audiences that can benefit from the findings: first, bibliometricians and researchers who seek alternative sources for bibliometric analyses of books, with a special focus on the Social Sciences and Humanities; second, librarians and research managers, who are the main clients to whom these tools are directed; and third, Altmetric.com itself, as well as other altmetrics providers, which may gain a better understanding of the limitations users encounter and improve this promising tool.
    Originality/value: This is the first study to analyze Altmetric.com's functionalities and capabilities for providing metric data for books and to compare results from this platform with those obtained via PlumX.
    Date
    20. 1.2015 18:30:22
    Source
    Aslib journal of information management. 70(2018) no.6, S.691-707
    Type
    a
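
    Record 1 above turns on merging Altmetric.com book data with an external list (Clarivate's Master Book List) and reports ISBN discrepancies as a practical obstacle. The sketch below is a hypothetical illustration of that kind of merge, not the authors' actual procedure: it normalizes ISBNs to the 13-digit form before joining two tables. The file and column names (altmetric_books.csv, master_book_list.csv, isbn) are assumptions.

      import pandas as pd

      def normalize_isbn(raw):
          # Strip hyphens/spaces and upgrade ISBN-10 to ISBN-13 so both sides join on one form.
          if not isinstance(raw, str):
              return None
          s = raw.replace("-", "").replace(" ", "").upper()
          if len(s) == 13 and s.isdigit():
              return s
          if len(s) == 10 and s[:9].isdigit():
              core = "978" + s[:9]
              check = (10 - sum((1 if i % 2 == 0 else 3) * int(d)
                                for i, d in enumerate(core)) % 10) % 10
              return core + str(check)
          return None  # drop identifiers that cannot be matched

      # Hypothetical inputs: an Altmetric export and an external book list, each with an 'isbn' column.
      altmetric = pd.read_csv("altmetric_books.csv")
      booklist  = pd.read_csv("master_book_list.csv")
      for df in (altmetric, booklist):
          df["isbn13"] = df["isbn"].map(normalize_isbn)

      merged = altmetric.merge(booklist, on="isbn13", how="inner", suffixes=("_alt", "_mbl"))
      print(f"{len(merged)} of {len(booklist)} external records matched an Altmetric entry")
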
  2. Torres-Salinas, D.; Robinson-García, N.: The time for bibliometric applications (2016) 0.01
    0.007058388 = product of:
      0.01764597 = sum of:
        0.008173384 = weight(_text_:a in 2763) [ClassicSimilarity], result of:
          0.008173384 = score(doc=2763,freq=2.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.15287387 = fieldWeight in 2763, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.09375 = fieldNorm(doc=2763)
        0.009472587 = product of:
          0.018945174 = sum of:
            0.018945174 = weight(_text_:information in 2763) [ClassicSimilarity], result of:
              0.018945174 = score(doc=2763,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.23274569 = fieldWeight in 2763, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.09375 = fieldNorm(doc=2763)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.4, S.1014-1015
    Type
    a
  3. García, J.A.; Rodríguez-Sánchez, R.; Fdez-Valdivia, J.; Robinson-García, N.; Torres-Salinas, D.: Mapping academic institutions according to their journal publication profile : Spanish universities as a case study (2012) 0.01
    0.006540462 = product of:
      0.016351154 = sum of:
        0.010769378 = weight(_text_:a in 500) [ClassicSimilarity], result of:
          0.010769378 = score(doc=500,freq=20.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.20142901 = fieldWeight in 500, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=500)
        0.0055817757 = product of:
          0.011163551 = sum of:
            0.011163551 = weight(_text_:information in 500) [ClassicSimilarity], result of:
              0.011163551 = score(doc=500,freq=4.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.13714671 = fieldWeight in 500, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=500)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    We introduce a novel methodology for mapping academic institutions based on their journal publication profiles. We believe that the journals in which researchers from academic institutions publish their work can be considered useful identifiers for representing the relationships between these institutions and establishing comparisons. However, when academic journals are used to represent research output, distinctions must be introduced between them based on their value as institution descriptors. This leads us to attach journal weights to the institution identifiers. Since a journal in which researchers from a large proportion of institutions published their papers may be a poor indicator of similarity between two academic institutions, it seems reasonable to weight it according to how frequently researchers from different institutions published their papers in that journal. Cluster analysis can then be applied to group the academic institutions, and dendrograms can be provided to illustrate the resulting groups under agglomerative hierarchical clustering. To test this methodology, we use a sample of Spanish universities as a case study. We first map the study sample according to the institutions' overall research output, and then repeat the exercise for two scientific fields (Information and Communication Technologies, and Medicine and Pharmacology) to demonstrate how our methodology can be applied not only to institutions as a whole, but also in different disciplinary contexts.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.11, S.2328-2340
    Type
    a
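
    Record 3 above maps institutions by their journal publication profiles: journals in which many institutions publish are down-weighted, and agglomerative hierarchical clustering with dendrograms then groups the institutions. The sketch below is a minimal illustration of that kind of pipeline under stated assumptions (an IDF-like journal weight and cosine distances); the abstract does not give the exact weighting formula, so this is not the authors' method.

      import numpy as np
      import matplotlib.pyplot as plt
      from scipy.cluster.hierarchy import linkage, dendrogram
      from scipy.spatial.distance import pdist

      # Hypothetical publication counts: rows = institutions, columns = journals.
      institutions = ["Univ A", "Univ B", "Univ C", "Univ D"]
      counts = np.array([[12, 4, 3, 5],
                         [10, 5, 0, 6],
                         [ 0, 6, 7, 1],
                         [ 1, 3, 6, 0]], dtype=float)

      # Down-weight journals used by many institutions (IDF-like; an assumption, not the paper's formula).
      used_by = (counts > 0).sum(axis=0)
      journal_weight = np.log1p(counts.shape[0] / used_by)
      profiles = counts * journal_weight

      # Agglomerative hierarchical clustering on cosine distances between institution profiles.
      tree = linkage(pdist(profiles, metric="cosine"), method="average")
      dendrogram(tree, labels=institutions)
      plt.tight_layout()
      plt.show()
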
  4. Torres-Salinas, D.; Robinson-García, N.; Jiménez-Contreras, E.; Herrera, F.; López-Cózar, E.D.: On the use of biplot analysis for multivariate bibliometric and scientific indicators (2013) 0.01
    0.005182888 = product of:
      0.012957219 = sum of:
        0.009010308 = weight(_text_:a in 972) [ClassicSimilarity], result of:
          0.009010308 = score(doc=972,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 972, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=972)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 972) [ClassicSimilarity], result of:
              0.007893822 = score(doc=972,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=972)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Bibliometric mapping and visualization techniques represent one of the main pillars in the field of scientometrics. Traditionally, the main methodologies employed for representing data are multidimensional scaling, principal component analysis, or correspondence analysis. In this paper we present a visualization methodology known as biplot analysis for representing bibliometric and science and technology indicators. A biplot is a graphical representation of multivariate data in which the elements of a data matrix are represented by points and vectors associated with the rows and columns of the matrix. We explore the possibilities of applying biplot analysis in the research policy area. More specifically, we first describe and introduce the reader to this methodology and, second, analyze its strengths and weaknesses through three different case studies: countries, universities, and scientific fields. For this, we use a variant of biplot analysis known as the JK-biplot. Finally, we compare the biplot representation with other multivariate analysis techniques. We conclude that biplot analysis could be a useful technique in scientometrics when studying multivariate data, as well as an easy-to-read tool for research decision makers.
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.7, S.1468-1479
    Type
    a
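
    Record 4 above presents biplot analysis, and specifically the JK-biplot, for displaying the rows and columns of a multivariate indicator matrix in a single plot. The sketch below is a minimal construction of a JK-biplot from a standardized matrix via the SVD (rows in principal coordinates, columns as vectors in standard coordinates); the units and indicator names are invented for illustration.

      import numpy as np
      import matplotlib.pyplot as plt

      # Hypothetical indicator matrix: rows = units (e.g. universities), columns = indicators.
      row_labels = ["U1", "U2", "U3", "U4", "U5"]
      col_labels = ["output", "citations", "intl_collab"]
      X = np.array([[120, 1500, 0.35],
                    [ 80,  900, 0.42],
                    [200, 2100, 0.28],
                    [ 60,  400, 0.55],
                    [150, 1700, 0.31]], dtype=float)

      # Center and scale each column so the indicators are comparable, then take the SVD.
      Z = (X - X.mean(axis=0)) / X.std(axis=0)
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)

      # JK-biplot: rows in principal coordinates (U * s), columns in standard coordinates (V).
      rows = U[:, :2] * s[:2]
      cols = Vt[:2, :].T

      fig, ax = plt.subplots()
      ax.scatter(rows[:, 0], rows[:, 1])
      for (x, y), name in zip(rows, row_labels):
          ax.annotate(name, (x, y))
      for (x, y), name in zip(cols, col_labels):
          ax.arrow(0, 0, x, y, head_width=0.05, color="gray")
          ax.annotate(name, (x, y))
      plt.show()
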
  5. Orduña-Malea, E.; Torres-Salinas, D.; López-Cózar, E.D.: Hyperlinks embedded in Twitter as a proxy for total external in-links to international university websites (2015) 0.01
    0.005182888 = product of:
      0.012957219 = sum of:
        0.009010308 = weight(_text_:a in 2043) [ClassicSimilarity], result of:
          0.009010308 = score(doc=2043,freq=14.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.1685276 = fieldWeight in 2043, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2043)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 2043) [ClassicSimilarity], result of:
              0.007893822 = score(doc=2043,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 2043, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2043)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Twitter is analyzed as a potential alternative source of external links for webometric analysis because of its capacity to embed hyperlinks in tweets. Given the limitations on searching Twitter's public application programming interface (API), we used the Topsy search engine as a source for compiling tweets. To this end, we took a global sample of 200 universities and compiled all tweets containing hyperlinks to any of these institutions. Further link data were obtained from alternative sources (MajesticSEO and OpenSiteExplorer) in order to compare the results. Thereafter, various statistical tests were performed to determine the correlation between the indicators and the possibility of predicting external links from the collected tweets. The results indicate a high volume of tweets, although they are skewed by the performance of specific universities and countries. The data provided by Topsy correlated significantly with all link indicators, particularly with OpenSiteExplorer (r = 0.769). Finally, the prediction models do not provide optimal results because of high error rates. We conclude that the use of Twitter (via Topsy) as a source of hyperlinks to universities produces promising results due to its high correlation with link indicators, though it is limited by policies and culture regarding use and presence in social networks.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.7, S.1447-1462
    Type
    a
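
    Record 5 above reports correlations between tweet-embedded link counts and external in-link indicators (e.g. r = 0.769 against OpenSiteExplorer) and notes that the prediction models suffered from high error rates. The sketch below is a generic illustration of that kind of analysis, not the authors' script: Pearson and Spearman correlations plus a simple log-log linear fit, with an assumed input file and column names (tweets, inlinks).

      import numpy as np
      import pandas as pd
      from scipy import stats

      # Hypothetical per-university counts: tweets linking to the site vs. external in-links.
      df = pd.read_csv("university_link_counts.csv")

      x = np.log1p(df["tweets"])
      y = np.log1p(df["inlinks"])

      pearson_r, pearson_p = stats.pearsonr(x, y)
      spearman_r, spearman_p = stats.spearmanr(df["tweets"], df["inlinks"])
      print(f"Pearson r={pearson_r:.3f} (p={pearson_p:.3g}), Spearman rho={spearman_r:.3f}")

      # A simple prediction model: ordinary least squares on the log-log scale, with relative error.
      slope, intercept, r_value, p_value, stderr = stats.linregress(x, y)
      predicted = np.expm1(intercept + slope * x)
      mape = np.mean(np.abs(predicted - df["inlinks"]) / np.maximum(df["inlinks"], 1))
      print(f"log-log fit: slope={slope:.2f}, mean absolute percentage error={mape:.1%}")
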
  6. López-Cózar, E.D.; Robinson-García, N.R.; Torres-Salinas, D.: The Google Scholar experiment : how to index false papers and manipulate bibliometric indicators (2014) 0.00
    0.0035052493 = product of:
      0.008763123 = sum of:
        0.0048162127 = weight(_text_:a in 1213) [ClassicSimilarity], result of:
          0.0048162127 = score(doc=1213,freq=4.0), product of:
            0.053464882 = queryWeight, product of:
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.046368346 = queryNorm
            0.090081796 = fieldWeight in 1213, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.153047 = idf(docFreq=37942, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1213)
        0.003946911 = product of:
          0.007893822 = sum of:
            0.007893822 = weight(_text_:information in 1213) [ClassicSimilarity], result of:
              0.007893822 = score(doc=1213,freq=2.0), product of:
                0.08139861 = queryWeight, product of:
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.046368346 = queryNorm
                0.09697737 = fieldWeight in 1213, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.7554779 = idf(docFreq=20772, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1213)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
    
    Abstract
    Google Scholar has been well received by the research community. Its promises of free, universal, and easy access to scientific literature, coupled with the perception that it covers the social sciences and the humanities better than other traditional multidisciplinary databases, have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual and journal levels. In this article, we show the results of an experiment undertaken to analyze Google Scholar's capacity to detect citation-counting manipulation. For this, we uploaded to an institutional web domain six documents authored by a fictitious researcher that referenced all the publications of the members of the EC3 research group at the University of Granada. The detection of these papers by Google Scholar caused an outburst in the number of citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such an outburst and how it could affect the future development of these products, at both the individual and the journal level, especially if Google Scholar persists in its lack of transparency.
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.3, S.446-454
    Type
    a