Search (13 results, page 1 of 1)

  • author_ss:"Moya-Anegón, F. de"
  1. Miguel, S.; Chinchilla-Rodriguez, Z.; Moya-Anegón, F. de: Open access and Scopus : a new approach to scientific visibility from the standpoint of access (2011) 0.00
    0.0026849252 = product of:
      0.0053698504 = sum of:
        0.0053698504 = product of:
          0.010739701 = sum of:
            0.010739701 = weight(_text_:a in 4460) [ClassicSimilarity], result of:
              0.010739701 = score(doc=4460,freq=14.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20223314 = fieldWeight in 4460, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4460)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
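The nested breakdown above is Lucene ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing its arithmetic from the reported factors for result 1 (doc 4460); all numbers are copied from the tree, and the two coord(1/2) lines each halve the term weight:

```python
import math

# Factors reported in the explain tree for result 1 (doc 4460)
freq, idf = 14.0, 1.153047
query_norm, field_norm = 0.046056706, 0.046875

tf = math.sqrt(freq)                  # ClassicSimilarity: tf = sqrt(termFreq)
query_weight = idf * query_norm       # 0.053105544 in the tree
field_weight = tf * idf * field_norm  # 0.20223314 in the tree
weight = query_weight * field_weight  # 0.010739701 in the tree
score = weight * 0.5 * 0.5            # two coord(1/2) factors
print(f"{score:.10f}")                # ~0.0026849252, the document's score
```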
    
    Abstract
    The last few years have seen the emergence of several open access (OA) options in scholarly communication, which can be grouped broadly into two areas referred to as the gold and green roads. Several recent studies have shown how large the extent of OA is, but there have been few studies showing the impact of OA on the visibility of journals covering all scientific fields and geographical regions. This research presents a series of informative analyses providing a broad overview of the degree of proliferation of OA journals in a data sample of about 17,000 active journals indexed in Scopus. This study shows a new approach to scientific visibility from a systematic combination of four databases: Scopus, the Directory of Open Access Journals, Rights Metadata for Open Archiving.
    Type
    a
  2. Gómez-Núñez, A.J.; Vargas-Quesada, B.; Moya-Anegón, F. de: Updating the SCImago journal and country rank classification : a new approach using Ward's clustering and alternative combination of citation measures (2016) 0.00
    0.0026742492 = product of:
      0.0053484985 = sum of:
        0.0053484985 = product of:
          0.010696997 = sum of:
            0.010696997 = weight(_text_:a in 2499) [ClassicSimilarity], result of:
              0.010696997 = score(doc=2499,freq=20.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.20142901 = fieldWeight in 2499, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2499)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study introduces a new proposal to refine the classification of the SCImago Journal and Country Rank (SJR) platform by using clustering techniques and an alternative combination of citation measures from an initial 18,891 SJR journal network. Thus, a journal-journal matrix including simultaneously fractionalized values of direct citation, cocitation, and coupling was symmetrized by cosine similarity and later transformed into distances before performing clustering. The results provided a new cluster-based subject structure comprising 290 clusters that emerge by executing Ward's clustering in two phases and using a mixed labeling procedure based on tf-idf scores of the original SJR category tags and significant words extracted from journal titles. In total, 13,716 SJR journals were classified using this new cluster-based scheme. Although more than 5,000 journals were omitted in the classification process, the method produced a consistent classification with a balanced structure of coherent and well-defined clusters, a moderated multiassignment of journals, and a softer concentration of journals over clusters than in the original SJR categories. New subject disciplines such as "nanoscience and nanotechnology" or "social work" were also detected, providing evidence of good performance of our approach in refining the journal classification and updating the subject classification structure.
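The pipeline the abstract describes — symmetrize a journal-journal citation matrix by cosine similarity, transform similarities into distances, then apply Ward's clustering — can be sketched on toy data. The matrix values below are illustrative, not SJR data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy journal-journal citation matrix (rows cite columns); invented values
C = np.array([[0, 0, 8, 2],
              [0, 0, 7, 3],
              [5, 4, 0, 0],
              [6, 3, 0, 0]], dtype=float)

# Symmetrize by cosine similarity between citation profiles
A = C / np.linalg.norm(C, axis=1, keepdims=True)
S = A @ A.T
np.fill_diagonal(S, 1.0)

# Transform similarities into distances, then cluster with Ward's method
D = 1.0 - S
Z = linkage(squareform(D, checks=False), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # journals 0-1 and 2-3 land in the same clusters
```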
    Type
    a
  3. López-Pujalte, C.; Guerrero-Bote, V.P.; Moya-Anegón, F. de: Order-based fitness functions for genetic algorithms applied to relevance feedback (2003) 0.00
    0.0025370158 = product of:
      0.0050740317 = sum of:
        0.0050740317 = product of:
          0.010148063 = sum of:
            0.010148063 = weight(_text_:a in 5154) [ClassicSimilarity], result of:
              0.010148063 = score(doc=5154,freq=18.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.19109234 = fieldWeight in 5154, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5154)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    López-Pujalte and Guerrero-Bote test a relevance feedback genetic algorithm while varying its order-based fitness functions, generating a function based upon the Ide dec-hi method as a baseline. Using the non-zero weighted term types assigned to the query, and to the initially retrieved set of documents, as genes, a chromosome of equal length is created for each. The algorithm is provided with the chromosomes for judged relevant documents, for judged irrelevant documents, and for the irrelevant documents with their terms negated. The algorithm uses random selection of all possible genes, but gives greater likelihood to those with higher fitness values. When the fittest chromosome of a previous population is eliminated, it is restored while the least fit of the new population is eliminated in its stead. A crossover probability of .8 and a mutation probability of .2 were used with 20 generations. Three fitness functions were utilized: the Horng and Yeh function, which takes into account the position of relevant documents, and two new functions, one based on accumulating the cosine similarity for retrieved documents, the other on stored fixed-recall-interval precisions. The Cranfield collection was used with the first 15 documents retrieved from 33 queries chosen to have at least 3 relevant documents in the first 15 and at least 5 relevant documents not initially retrieved. Precision was calculated at fixed recall levels using the residual collection method, which removes viewed documents. One of the three functions improved the original retrieval by 127 percent, while the Ide dec-hi method provided a 120 percent improvement.
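A hedged sketch of the genetic-algorithm setup described above (elitism, crossover probability .8, mutation probability .2, 20 generations), using a cosine-accumulating fitness in the spirit of one of the paper's functions; the document vectors and population size are invented toy data, not the Cranfield setup:

```python
import random
import numpy as np

random.seed(0)
rng = np.random.default_rng(0)

# Toy term-weight vectors for judged documents (6 hypothetical terms)
relevant = np.array([[0.9, 0.1, 0.8, 0.0, 0.2, 0.0],
                     [0.8, 0.0, 0.7, 0.1, 0.0, 0.1]])
irrelevant = np.array([[0.0, 0.9, 0.1, 0.8, 0.0, 0.7]])

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def fitness(q):
    # Accumulate similarity to relevant docs, penalize similarity to
    # irrelevant ones (a simplification, not the order-based functions)
    return (sum(cosine(q, d) for d in relevant)
            - sum(cosine(q, d) for d in irrelevant))

def evolve(pop, gens=20, p_cross=0.8, p_mut=0.2):
    for _ in range(gens):
        pop = sorted(pop, key=fitness, reverse=True)
        nxt = [pop[0].copy()]                    # elitism: restore the fittest
        while len(nxt) < len(pop):
            a, b = random.choices(pop[:5], k=2)  # bias toward fitter parents
            child = a.copy()
            if random.random() < p_cross:        # one-point crossover
                cut = random.randrange(1, len(a))
                child[cut:] = b[cut:]
            if random.random() < p_mut:          # mutate one gene
                child[random.randrange(len(a))] = rng.random()
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

pop = [rng.random(6) for _ in range(10)]
best = evolve(pop)
```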
    Type
    a
  4. Guerrero Bote, V.P.; López-Pujalte, C.; Faba, C.; Reyes, M.J.; Zapica, F.; Moya-Anegón, F. de: Artificial neural networks applied to information retrieval (2003) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 2780) [ClassicSimilarity], result of:
              0.009076704 = score(doc=2780,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 2780, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2780)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Connectionist models or neural networks are a type of AI technique based on small interconnected processing nodes which yield an overall behaviour that is intelligent. They have very broad utility. In IR, they have been used in filtering information, query expansion, relevance feedback, clustering terms or documents, the topological organization of documents, labeling groups of documents, interface design, reduction of document dimension, the classification of the terms in a brainstorming session, etc. The present work is a fairly exhaustive study and classification of the application of this type of technique to IR. For this purpose, we focus on the main publications in the area of IR and neural networks, as well as on some applications of our own design.
    Type
    a
  5. Galvez, C.; Moya-Anegón, F. de; Solana, V.H.: Term conflation methods in information retrieval : non-linguistic and linguistic approaches (2005) 0.00
    0.002269176 = product of:
      0.004538352 = sum of:
        0.004538352 = product of:
          0.009076704 = sum of:
            0.009076704 = weight(_text_:a in 4394) [ClassicSimilarity], result of:
              0.009076704 = score(doc=4394,freq=10.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1709182 = fieldWeight in 4394, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4394)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To propose a categorization of the different conflation procedures at the two basic approaches, non-linguistic and linguistic techniques, and to justify the application of normalization methods within the framework of linguistic techniques. Design/methodology/approach - Presents a range of term conflation methods that can be used in information retrieval. The uniterm and multiterm variants can be considered equivalent units for the purposes of automatic indexing. Stemming algorithms, segmentation rules, association measures and clustering techniques are well-evaluated non-linguistic methods, and experiments with these techniques show a wide variety of results. Alternatively, lemmatisation and the use of syntactic pattern-matching, through equivalence relations represented in finite-state transducers (FSTs), are emerging methods for the recognition and standardization of terms. Findings - The survey attempts to point out the positive and negative effects of the linguistic approach and its potential as a term conflation method. Originality/value - Outlines the importance of FSTs for the normalization of term variants.
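To make the non-linguistic side of the contrast concrete, here is a deliberately tiny suffix-stripping conflator. It is not the Porter algorithm or any method from the paper; the suffix list and minimum stem length are invented for illustration:

```python
# Invented suffix inventory; real stemmers use ordered rule sets with
# conditions on the remaining stem
SUFFIXES = ["ational", "ization", "ations", "ions", "ing", "ion", "ies",
            "es", "ed", "s"]

def stem(token: str) -> str:
    # Strip the longest matching suffix, keeping a stem of at least 3 letters
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if token.endswith(suf) and len(token) - len(suf) >= 3:
            return token[: -len(suf)]
    return token

# Term variants conflate to a single indexing unit
print({stem(w) for w in ["connection", "connections",
                         "connected", "connecting"]})   # {'connect'}
```

A lemmatizer would instead map the variants to a dictionary form, which is where the accuracy/underanalysis trade-off discussed in the survey arises.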
    Type
    a
  6. Faba-Pérez, C.; Zapico-Alonso, F.; Guerrero-Bote, V.P.; Moya-Anegón, F. de: Comparative analysis of webometric measurements in thematic environments (2005) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 3554) [ClassicSimilarity], result of:
              0.008202582 = score(doc=3554,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 3554, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3554)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    There have been many attempts to evaluate Web spaces on the basis of the information that they provide, their form or functionality, or even the importance given to each of them by the Web itself. The indicators that have been developed for this purpose fall into two groups: those based on the study of a Web space's formal characteristics, and those related to its link structure. In this study we examine most of the webometric indicators that have been proposed in the literature together with others of our own design by applying them to a set of thematically related Web spaces and analyzing the relationships between the different indicators.
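One common link-structure indicator of the kind surveyed can be sketched as a PageRank-style authority score over a toy link matrix. The link matrix and the damping factor 0.85 are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Toy link matrix among 4 web spaces: L[i, j] = 1 if space i links to space j
L = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

out = L.sum(axis=1, keepdims=True)
M = (L / out).T                      # column-stochastic transition matrix

# Power iteration with damping 0.85 and uniform teleport
r = np.full(4, 0.25)
for _ in range(100):
    r = 0.15 / 4 + 0.85 * (M @ r)
print(r.round(3))                    # space 2, with the most in-links, ranks highest
```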
    Type
    a
  7. Moya-Anegón, F. de; Vargas-Quesada, B.; Chinchilla-Rodríguez, Z.; Corera-Álvarez, E.; Munoz-Fernández, F.J.; Herrero-Solana, V.; SCImago Group: Visualizing the marrow of science (2007) 0.00
    0.0020506454 = product of:
      0.004101291 = sum of:
        0.004101291 = product of:
          0.008202582 = sum of:
            0.008202582 = weight(_text_:a in 1313) [ClassicSimilarity], result of:
              0.008202582 = score(doc=1313,freq=6.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.1544581 = fieldWeight in 1313, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1313)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study proposes a new methodology that allows for the generation of scientograms of major scientific domains, constructed on the basis of cocitation of Institute of Scientific Information categories, pruned using Pathfinder networks, with a layout determined by algorithms of the spring-embedder type (Kamada-Kawai), then corroborated structurally by factor analysis. We present the complete scientogram of the world for the year 2002. It integrates the natural sciences, the social sciences, and arts and humanities. Its basic structure and the essential relationships therein are revealed, allowing us to simultaneously analyze the macrostructure, microstructure, and marrow of worldwide scientific output.
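The pruning step can be illustrated with the Pathfinder parameters commonly used for scientograms, r = infinity and q = n - 1 (an assumption here, not stated in the abstract). Under them a link survives only if no indirect path has a smaller maximum-edge (minimax) cost; the distance matrix below is invented toy data:

```python
import numpy as np

# Toy symmetric distance matrix among 4 categories (illustrative values)
D = np.array([[0, 1, 4, 6],
              [1, 0, 2, 5],
              [4, 2, 0, 1],
              [6, 5, 1, 0]], dtype=float)

# Minimax distances via a Floyd-Warshall-style pass: a path's cost is its
# largest edge, and we keep the smallest such cost over all paths
n = len(D)
minimax = D.copy()
for k in range(n):
    for i in range(n):
        for j in range(n):
            minimax[i, j] = min(minimax[i, j],
                                max(minimax[i, k], minimax[k, j]))

# An edge survives only if its direct distance matches the minimax distance
pruned = (D <= minimax) & (D > 0)
print(pruned.astype(int))   # only the chain 0-1, 1-2, 2-3 survives
```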
    Type
    a
  8. Galvez, C.; Moya-Anegón, F. de: An evaluation of conflation accuracy using finite-state transducers (2006) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 5599) [ClassicSimilarity], result of:
              0.008118451 = score(doc=5599,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 5599, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5599)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - To evaluate the accuracy of conflation methods based on finite-state transducers (FSTs). Design/methodology/approach - Incorrectly lemmatized and stemmed forms may lead to the retrieval of inappropriate documents. Experimental studies to date have focused on retrieval performance, but very few on conflation performance. The process of normalization we used involved a linguistic toolbox that allowed us to construct, through graphic interfaces, electronic dictionaries represented internally by FSTs. The lexical resources developed were applied to a Spanish test corpus for merging term variants in canonical lemmatized forms. Conflation performance was evaluated in terms of an adaptation of recall and precision measures, based on accuracy and coverage, not actual retrieval. The results were compared with those obtained using a Spanish version of the Porter algorithm. Findings - The conclusion is that the main strength of lemmatization is its accuracy, whereas its main limitation is the underanalysis of variant forms. Originality/value - The report outlines the potential of transducers in their application to normalization processes.
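The adapted recall/precision idea — scoring conflation agreement rather than retrieval — can be sketched as pairwise agreement between gold variant groups and a conflator's output. Both the groups and the conflator outputs below are invented, not the paper's Spanish test corpus:

```python
from itertools import combinations

# Hypothetical gold variant groups (each set should conflate to one form)
gold = [{"niño", "niños", "niña"}, {"casa", "casas"}]
# Hypothetical conflator output: variant -> produced canonical form
conflated = {"niño": "niñ", "niños": "niñ", "niña": "niña",
             "casa": "cas", "casas": "cas"}

def pair_metrics(gold, conflated):
    words = [w for g in gold for w in g]
    # Word pairs the gold standard says belong together
    same_gold = {frozenset(p) for g in gold
                 for p in combinations(sorted(g), 2)}
    # Word pairs the conflator actually merged
    same_sys = {frozenset((a, b)) for a, b in combinations(sorted(words), 2)
                if conflated[a] == conflated[b]}
    tp = len(same_gold & same_sys)
    return tp / len(same_sys), tp / len(same_gold)

precision, recall = pair_metrics(gold, conflated)
print(precision, recall)   # 1.0 and 0.5 on this toy data: accurate but
                           # underanalyzing, as the Findings describe
```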
    Type
    a
  9. Guerrero Bote, V.P.; Olmeda-Gómez, C.; Moya-Anegón, F. de: Quantifying the benefits of international scientific collaboration (2013) 0.00
    0.0020296127 = product of:
      0.0040592253 = sum of:
        0.0040592253 = product of:
          0.008118451 = sum of:
            0.008118451 = weight(_text_:a in 618) [ClassicSimilarity], result of:
              0.008118451 = score(doc=618,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.15287387 = fieldWeight in 618, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046875 = fieldNorm(doc=618)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We analyze the benefits in terms of scientific impact deriving from international collaboration, examining both those for a country when it collaborates and also those for the other countries when they are collaborating with the former. The data show the more countries there are involved in the collaboration, the greater the gain in impact. Contrary to what we expected, the scientific impact of a country does not significantly influence the benefit it derives from collaboration, but does seem to positively influence the benefit obtained by the other countries collaborating with it. Although there was a weak correlation between these two classes of benefit, the countries with the highest impact were clear outliers from this correlation, tending to provide proportionally more benefit to their collaborating countries than they themselves obtained. Two surprising findings were the null benefit resulting from collaboration with Iran, and the small benefit resulting from collaboration with the United States despite its high impact.
    Type
    a
  10. Lopez-Pujalte, C.; Guerrero Bote, V.P.; Moya-Anegón, F. de: Evaluation of the application of genetic algorithms to relevance feedback (2003) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 2756) [ClassicSimilarity], result of:
              0.006765375 = score(doc=2756,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 2756, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2756)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We evaluated the different genetic algorithms applied to relevance feedback that are to be found in the literature and which follow the vector space model (the most commonly used model in this type of application). They were compared with a traditional relevance feedback algorithm - the Ide dec-hi method - since this had given the best results in the study of Salton & Buckley (1990) on this subject. The experiment was performed on the Cranfield collection, and the different algorithms were evaluated using the residual collection method (one of the most suitable methods for evaluating relevance feedback techniques). The results varied greatly depending on the fitness function that was used, from no improvement in some of the genetic algorithms, to a more than 127% improvement with one algorithm, surpassing even the traditional Ide dec-hi method. One can therefore conclude that genetic algorithms show great promise as an aid to implementing a truly effective information retrieval system.
    Type
    a
  11. Leydesdorff, L.; Moya-Anegón, F. de; Guerrero-Bote, V.P.: Journal maps, interactive overlays, and the measurement of interdisciplinarity on the basis of Scopus data (1996-2012) (2015) 0.00
    0.0016913437 = product of:
      0.0033826875 = sum of:
        0.0033826875 = product of:
          0.006765375 = sum of:
            0.006765375 = weight(_text_:a in 1814) [ClassicSimilarity], result of:
              0.006765375 = score(doc=1814,freq=8.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12739488 = fieldWeight in 1814, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1814)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Using Scopus data, we construct a global map of science based on aggregated journal-journal citations from 1996-2012 (N of journals = 20,554). This base map enables users to overlay downloads from Scopus interactively. Using a single year (e.g., 2012), results can be compared with mappings based on the Journal Citation Reports at the Web of Science (N = 10,936). The Scopus maps are more detailed at both the local and global levels because of their greater coverage, including, for example, the arts and humanities. The base maps can be interactively overlaid with journal distributions in sets downloaded from Scopus, for example, for the purpose of portfolio analysis. Rao-Stirling diversity can be used as a measure of interdisciplinarity in the sets under study. Maps at the global and the local level, however, can be very different because of the different levels of aggregation involved. Two journals, for example, can both belong to the humanities in the global map, but participate in different specialty structures locally. The base map and interactive tools are available online (with instructions) at http://www.leydesdorff.net/scopus_ovl.
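Rao-Stirling diversity, mentioned above as the interdisciplinarity measure, sums p_i * p_j * d_ij over unordered category pairs, where p holds the portfolio's category proportions and d the pairwise cognitive distances. A minimal sketch on invented values:

```python
import numpy as np

# Portfolio spread over three subject categories (illustrative proportions)
p = np.array([0.5, 0.3, 0.2])

# Pairwise distances between categories, e.g. 1 minus the cosine similarity
# of their citation profiles (illustrative values)
d = np.array([[0.0, 0.8, 0.9],
              [0.8, 0.0, 0.4],
              [0.9, 0.4, 0.0]])

# The quadratic form counts each unordered pair twice, hence the 0.5
delta = 0.5 * float(p @ d @ p)
print(round(delta, 3))   # 0.234 on these toy values
```

A concentrated portfolio (one p_i near 1) or one spread over nearby categories (small d_ij) both yield low diversity, which is what makes the measure sensitive to the local-versus-global aggregation issue the abstract raises.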
    Type
    a
  12. López-Pujalte, C.; Guerrero-Bote, V.P.; Moya-Anegón, F. de: Genetic algorithms in relevance feedback : a second test and new contributions (2003) 0.00
    0.001674345 = product of:
      0.00334869 = sum of:
        0.00334869 = product of:
          0.00669738 = sum of:
            0.00669738 = weight(_text_:a in 1076) [ClassicSimilarity], result of:
              0.00669738 = score(doc=1076,freq=4.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.12611452 = fieldWeight in 1076, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1076)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a
  13. Leydesdorff, L.; Moya-Anegón, F. de; Nooy, W. de: Aggregated journal-journal citation relations in scopus and web of science matched and compared in terms of networks, maps, and interactive overlays (2016) 0.00
    8.4567186E-4 = product of:
      0.0016913437 = sum of:
        0.0016913437 = product of:
          0.0033826875 = sum of:
            0.0033826875 = weight(_text_:a in 3090) [ClassicSimilarity], result of:
              0.0033826875 = score(doc=3090,freq=2.0), product of:
                0.053105544 = queryWeight, product of:
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.046056706 = queryNorm
                0.06369744 = fieldWeight in 3090, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  1.153047 = idf(docFreq=37942, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3090)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Type
    a