Search (4 results, page 1 of 1)

  • author_ss:"Herrera-Viedma, E."
  1. Peis, E.; Herrera-Viedma, E.; Herrera, J.C.: On the evaluation of XML documents using fuzzy linguistic techniques (2003) 0.01
    0.011601887 = product of:
      0.081213206 = sum of:
        0.081213206 = weight(_text_:great in 2778) [ClassicSimilarity], result of:
          0.081213206 = score(doc=2778,freq=2.0), product of:
            0.21757144 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03863967 = queryNorm
            0.37327147 = fieldWeight in 2778, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.046875 = fieldNorm(doc=2778)
      0.14285715 = coord(1/7)
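
    The arithmetic in the explain tree above can be reproduced by hand. The following minimal Python sketch (no Lucene dependency; the freq, docFreq, maxDocs, queryNorm, fieldNorm and coord values are copied from the tree for doc 2778 and the query term "great") recomputes the reported score using the standard ClassicSimilarity definitions tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).

      import math

      # Values copied from the explain tree for doc 2778, term "great".
      freq       = 2.0          # termFreq of "great" in the field
      doc_freq   = 430
      max_docs   = 44218
      query_norm = 0.03863967
      field_norm = 0.046875
      coord      = 1.0 / 7.0    # 1 of 7 query clauses matched

      tf  = math.sqrt(freq)                               # ~1.4142135
      idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))   # ~5.6307793
      query_weight = idf * query_norm                     # ~0.21757144
      field_weight = tf * idf * field_norm                # ~0.37327147
      score = query_weight * field_weight * coord
      print(score)  # ~0.011601887, the score reported for entry 1

    The other explain trees in this result list follow the same arithmetic; entries 3 and 4 only add an inner coord(1/2) factor before the final coord(1/7).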
    
    Abstract
    Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of XML documents based on computing with words is presented. Given an XML document type (e.g. a scientific article), we consider that its elements are not equally informative. This is indicated by using a DTD and defining linguistic importance attributes for the more meaningful elements of the designed DTD. The evaluation method then generates linguistic recommendations from the linguistic evaluation judgements provided by different recommenders on the meaningful elements of the DTD.
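
    The abstract states the overall idea (element-level linguistic judgements, weighted by linguistic importance attributes attached to DTD elements, aggregated into a linguistic recommendation) without reproducing the operators. The sketch below only illustrates that idea under stated assumptions: the label set, the example elements and weights, and the weighted-mean aggregation are placeholders, not the computing-with-words model defined in the paper.

      # Illustrative sketch only: labels, weights, and the aggregation
      # operator are assumptions, not the paper's computing-with-words model.
      LABELS = ["None", "Low", "Medium", "High", "Total"]   # ordinal scale

      def aggregate(judgements):
          """Combine (label, importance) pairs given on DTD elements into one
          recommendation label: importance-weighted mean of label indices,
          rounded back onto the linguistic scale."""
          idx = {lab: i for i, lab in enumerate(LABELS)}
          num = sum(idx[lab] * w for lab, w in judgements)
          den = sum(w for _, w in judgements)
          return LABELS[round(num / den)]

      # One recommender's judgements on meaningful elements of an
      # article-like DTD: (evaluation, importance attribute).
      judgements = [("High", 0.9),     # e.g. <title>
                    ("Medium", 0.7),   # e.g. <abstract>
                    ("Low", 0.4)]      # e.g. <keywords>
      print(aggregate(judgements))     # -> "Medium"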
  2. Alonso, S.; Cabrerizo, F.J.; Herrera-Viedma, E.; Herrera, F.: WoS query partitioner : a tool to retrieve very large numbers of items from the Web of Science using different source-based partitioning approaches (2010) 0.01
    0.009668238 = product of:
      0.06767767 = sum of:
        0.06767767 = weight(_text_:great in 3701) [ClassicSimilarity], result of:
          0.06767767 = score(doc=3701,freq=2.0), product of:
            0.21757144 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.03863967 = queryNorm
            0.31105953 = fieldWeight in 3701, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3701)
      0.14285715 = coord(1/7)
    
    Abstract
    Thomson Reuters' Web of Science (WoS) is undoubtedly a great tool for scientometric purposes. It allows one to retrieve and compute different measures, such as the total number of papers that satisfy a particular condition; however, it is also well known that this tool imposes several restrictions that make obtaining certain results difficult. One of those constraints is that the tool does not offer the total count of documents in a dataset if it is larger than 100,000 items. In this article, we propose and analyze different approaches that involve partitioning the search space (using the Source field) to retrieve item counts for very large datasets from the WoS. The proposed techniques improve on previous approaches: they do not need any extra information about the retrieved dataset (thus allowing completely automatic procedures to retrieve the results), they are designed to avoid many of the restrictions imposed by the WoS, and they can be easily applied to almost any query. Finally, a description of WoS Query Partitioner, a freely available online interactive tool that implements those techniques, is presented.
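
    The abstract describes the approach (partition the search space on the Source field so that every partition stays below the 100,000-item reporting limit, then sum the partial counts) but not its implementation. The sketch below only illustrates that idea; the count_items placeholder, the recurse-on-prefix scheme, and the SO= field syntax are assumptions for illustration, not the actual WoS Query Partitioner code.

      import string

      LIMIT = 100_000  # WoS does not report exact totals above this size

      def count_items(query):
          """Placeholder: return the item count WoS reports for `query`.
          The actual retrieval step (API call or browser automation) is
          not shown here."""
          raise NotImplementedError

      def partitioned_count(base_query, prefix=""):
          """Split base_query by source-name prefix (illustrative SO= syntax)
          and recurse on any partition that still exceeds the limit; source
          names starting with digits or symbols would need an extra bucket."""
          total = 0
          for ch in string.ascii_uppercase:
              q = f"({base_query}) AND SO=({prefix}{ch}*)"
              n = count_items(q)
              if n >= LIMIT:      # partition still too large: refine it
                  n = partitioned_count(base_query, prefix + ch)
              total += n
          return total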
  3. Herrera-Viedma, E.; Pasi, G.; Lopez-Herrera, A.G.; Porcel, C.: Evaluating the information quality of Web sites : a methodology based on fuzzy computing with words (2006) 0.00
    0.0018696935 = product of:
      0.013087854 = sum of:
        0.013087854 = product of:
          0.026175708 = sum of:
            0.026175708 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
              0.026175708 = score(doc=5286,freq=2.0), product of:
                0.13530953 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03863967 = queryNorm
                0.19345059 = fieldWeight in 5286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 17:05:46
  4. Herrera-Viedma, E.; Pasi, G.: Soft approaches to information retrieval and information access on the Web : an introduction to the special topic section (2006) 0.00
    0.0014957548 = product of:
      0.010470283 = sum of:
        0.010470283 = product of:
          0.020940566 = sum of:
            0.020940566 = weight(_text_:22 in 5285) [ClassicSimilarity], result of:
              0.020940566 = score(doc=5285,freq=2.0), product of:
                0.13530953 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03863967 = queryNorm
                0.15476047 = fieldWeight in 5285, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5285)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 7.2006 16:59:33