Search (5 results, page 1 of 1)

  • author_ss:"López-Cózar, E.D."
  • theme_ss:"Informetrie"
  1. Martín-Martín, A.; Ayllón, J.M.; López-Cózar, E.D.; Orduna-Malea, E.: Nature's top 100 Re-revisited (2015) 0.00
    0.0023919214 = product of:
      0.0047838427 = sum of:
        0.0047838427 = product of:
          0.009567685 = sum of:
            0.009567685 = weight(_text_:a in 2352) [ClassicSimilarity], result of:
              0.009567685 = score(doc=2352,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.18016359 = fieldWeight in 2352, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2352)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
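The block above is the Lucene "explain" tree behind the 0.00 relevance score shown next to the title: ClassicSimilarity TF-IDF scoring, with tf = sqrt(termFreq) and the two coord(1/2) factors shown. A minimal sketch reproducing its arithmetic (the function name is illustrative):

```python
import math

def classic_score(freq, idf, query_norm, field_norm, coord=0.5 * 0.5):
    """Reproduce a Lucene ClassicSimilarity explain tree for a single term."""
    tf = math.sqrt(freq)                   # tf(freq=4.0) = 2.0 in the tree
    query_weight = idf * query_norm        # "queryWeight" node
    field_weight = tf * idf * field_norm   # "fieldWeight" node
    return coord * query_weight * field_weight

# Values taken from the explain tree for doc 2352 above
score = classic_score(freq=4.0, idf=1.153047,
                      query_norm=0.046056706, field_norm=0.078125)
print(f"{score:.10f}")
```

With these inputs the result matches the top-level 0.0023919214 of the tree.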
    
    Type: a
  2. Torres-Salinas, D.; Robinson-García, N.; Jiménez-Contreras, E.; Herrera, F.; López-Cózar, E.D.: On the use of biplot analysis for multivariate bibliometric and scientific indicators (2013) 0.00
    Abstract
    Bibliometric mapping and visualization techniques represent one of the main pillars of scientometrics. Traditionally, the main methodologies employed for representing data are multidimensional scaling, principal component analysis, and correspondence analysis. In this paper we present a visualization methodology known as biplot analysis for representing bibliometric and science and technology indicators. A biplot is a graphical representation of multivariate data in which the elements of a data matrix are represented by dots and vectors associated with the rows and columns of the matrix. We explore the possibilities of applying biplot analysis in the research policy area: we first introduce the reader to the methodology and then analyze its strengths and weaknesses through three case studies covering countries, universities, and scientific fields, using a variant known as the JK-biplot. Finally, we compare the biplot representation with other multivariate analysis techniques. We conclude that biplot analysis can be a useful technique in scientometrics when studying multivariate data, as well as an easy-to-read tool for research decision makers.
    Type: a
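As a rough illustration of the technique described in the abstract above, a biplot can be obtained from the singular value decomposition of a column-centered data matrix; in the JK-biplot (row-metric preserving) variant, the row markers absorb the singular values while the column markers keep unit scale. A minimal numpy sketch on hypothetical indicator data (the matrix and labels are invented for illustration):

```python
import numpy as np

# Hypothetical indicator matrix: rows = units (e.g. universities),
# columns = indicators (e.g. output, citations, collaboration)
X = np.array([[10.0, 4.0, 7.0],
              [ 6.0, 9.0, 2.0],
              [ 8.0, 5.0, 6.0],
              [ 3.0, 7.0, 9.0]])

Xc = X - X.mean(axis=0)            # column-center the matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# JK-biplot: rows absorb the singular values, columns do not
row_markers = U[:, :2] * s[:2]     # dots: one per row of X
col_markers = Vt[:2, :].T          # vectors: one per column of X

# Inner products of the rank-2 markers approximate the centered data
approx = row_markers @ col_markers.T
print(np.round(approx - Xc, 2))
```

Plotting row markers as points and column markers as arrows in the same plane gives the biplot; the quality of the rank-2 approximation tells you how faithful that picture is.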
  3. Orduña-Malea, E.; Torres-Salinas, D.; López-Cózar, E.D.: Hyperlinks embedded in Twitter as a proxy for total external in-links to international university websites (2015) 0.00
    Abstract
    We analyze Twitter as a potential alternative source of external links for webometric analysis, given its capacity to embed hyperlinks in tweets. Because of the limitations on searching Twitter's public application programming interface (API), we used the Topsy search engine as a source for compiling tweets. To this end, we took a global sample of 200 universities and compiled all the tweets with hyperlinks to any of these institutions. Further link data were obtained from alternative sources (MajesticSEO and OpenSiteExplorer) in order to compare the results. Various statistical tests were then performed to determine the correlation between the indicators and the possibility of predicting external links from the collected tweets. The results indicate a high volume of tweets, although they are skewed by the performance of specific universities and countries. The data provided by Topsy correlated significantly with all link indicators, particularly with OpenSiteExplorer (r = 0.769). Finally, the prediction models do not provide optimum results because of high error rates. We conclude that the use of Twitter (via Topsy) as a source of hyperlinks to universities produces promising results due to its high correlation with link indicators, though it is limited by policies and culture regarding the use of and presence in social networks.
    Type: a
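The statistical step described in the abstract above amounts to correlating per-university tweet-link counts with external in-link counts. A minimal sketch with invented counts (all figures below are hypothetical; only the r = 0.769 reported in the abstract comes from the study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-university counts: tweets with embedded hyperlinks
# vs. total external in-links reported by a link-intelligence source
tweet_links = [120, 45, 300, 80, 15, 210]
external_links = [9800, 3100, 15000, 7200, 900, 12500]
r = pearson_r(tweet_links, external_links)
print(f"r = {r:.3f}")
```

Skewed count data of this kind is often correlated on ranks (Spearman) instead; the Pearson version is shown here because the abstract reports an r value.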
  4. Delgado-Quirós, L.; Aguillo, I.F.; Martín-Martín, A.; López-Cózar, E.D.; Orduña-Malea, E.; Ortega, J.L.: Why are these publications missing? : uncovering the reasons behind the exclusion of documents in free-access scholarly databases (2024) 0.00
    Abstract
    This study analyses the coverage of seven free-access bibliographic databases (Crossref, the non-subscription version of Dimensions, Google Scholar, Lens, Microsoft Academic, Scilit, and Semantic Scholar) to identify the potential reasons that might cause the exclusion of scholarly documents and how they could influence coverage. To do this, 116 k randomly selected bibliographic records from Crossref were used as a baseline, and API endpoints and web scraping were used to query each database. The results show that coverage differences are mainly caused by the way each service builds its database. While classic bibliographic databases ingest almost exactly the same content as Crossref (Lens and Scilit miss only 0.1% and 0.2% of the records, respectively), academic search engines present lower coverage (Google Scholar misses 9.8% of the records, Semantic Scholar 10%, and Microsoft Academic 12%). These differences are mainly attributed to external factors, such as web accessibility and robot exclusion policies (39.2%-46%), and to internal requirements that exclude secondary content (6.5%-11.6%). In the case of Dimensions, the classic bibliographic database with the lowest coverage (7.6%), internal selection criteria such as the indexation of full books instead of book chapters (65%) and the exclusion of secondary content (15%) are the main reasons for missing publications.
    Type: a
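The coverage comparison described above reduces to set arithmetic over a DOI baseline: query each service for every record in the Crossref sample and report the share it misses. A schematic sketch with toy data (the service names, DOIs, and counts below are illustrative; the percentages mirror the abstract only by construction):

```python
def coverage_report(baseline_dois, found_by_service):
    """Share of a baseline DOI set covered / missed by each service."""
    base = set(baseline_dois)
    report = {}
    for service, found in found_by_service.items():
        missing = base - set(found)
        report[service] = {
            "coverage_pct": 100 * (len(base) - len(missing)) / len(base),
            "missing_pct": 100 * len(missing) / len(base),
        }
    return report

# Toy stand-in for the 116 k-record baseline and per-service lookups
baseline = [f"10.1234/rec{i}" for i in range(1000)]
found = {
    "Lens": baseline[1:],                   # misses 1 record  -> 0.1%
    "Scholar-like engine": baseline[:902],  # misses 98 records -> 9.8%
}
for service, stats in coverage_report(baseline, found).items():
    print(service, stats)
```

In practice the "found" sets would come from API lookups or scraping per service, which is where the external factors (robot exclusion, web accessibility) enter.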
  5. López-Cózar, E.D.; Robinson-García, N.R.; Torres-Salinas, D.: The Google Scholar experiment : how to index false papers and manipulate bibliometric indicators (2014) 0.00
    Abstract
    Google Scholar has been well received by the research community. Its promise of free, universal, and easy access to scientific literature, coupled with the perception that it covers the social sciences and the humanities better than other traditional multidisciplinary databases, has contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual and the journal level. In this article, we show the results of an experiment undertaken to analyze Google Scholar's capacity to detect citation-counting manipulation. For this, we uploaded to an institutional web domain six documents authored by a fictitious researcher that referenced all the publications of the members of the EC3 research group at the University of Granada. The detection of these papers by Google Scholar caused an outburst in the number of citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such an outburst and how it could affect the future development of these products at both the individual and the journal level, especially if Google Scholar persists in its lack of transparency.
    Type: a