Search (4 results, page 1 of 1)

  • author_ss:"Gonzalo, J."
  1. Rodríguez-Vidal, J.; Carrillo-de-Albornoz, J.; Gonzalo, J.; Plaza, L.: Authority and priority signals in automatic summary generation for online reputation management (2021) 0.00
    Abstract
    Online reputation management (ORM) comprises the collection of techniques that help monitor and improve the public image of an entity (companies, products, institutions) on the Internet. ORM experts try to minimize the negative impact of information about an entity while maximizing the positive material, so that the entity appears more trustworthy to customers. Because of the huge amount of information published on the Internet every day, the entire flow of information needs to be summarized so that only the data relevant to the entities are retained. Traditionally, automatic summarization in the ORM scenario takes in-domain signals into account, such as popularity, polarity for reputation, and novelty, but another feature remains to be considered: the authority of the people involved. This authority depends on the ability to convince others and therefore to influence opinions. In this work, we propose the use of authority signals that measure the influence of a user, jointly with (a) priority signals related to the ORM domain and (b) information regarding the different topics that influential people are talking about. Our results indicate that the use of authority signals may significantly improve the quality of automatically generated summaries.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.5, S.583-594
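    The signal combination described in this abstract can be illustrated with a minimal sketch. This is not the authors' system: the signal names (polarity, novelty, author authority), the weights, and the example posts below are illustrative assumptions only.
    ```python
    # Illustrative sketch (not the authors' implementation): score candidate posts
    # about an entity by a weighted combination of priority signals (polarity,
    # novelty) and an authority signal, then keep the top-k posts as an
    # extractive summary. Fields, weights, and data are assumed for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        polarity: float          # reputational polarity in [-1, 1]
        novelty: float           # 0..1, how new the content is
        author_authority: float  # 0..1, estimated influence of the author

    def summarize(posts, k=2, w_polarity=0.3, w_novelty=0.3, w_authority=0.4):
        """Rank posts by a linear mix of priority and authority signals."""
        def score(p: Post) -> float:
            # Strongly negative posts are reputationally urgent, so use magnitude.
            return (w_polarity * abs(p.polarity)
                    + w_novelty * p.novelty
                    + w_authority * p.author_authority)
        return sorted(posts, key=score, reverse=True)[:k]

    if __name__ == "__main__":
        posts = [
            Post("Great customer service today!", 0.8, 0.2, 0.1),
            Post("Serious data breach reported", -0.9, 0.9, 0.8),
            Post("New store opening downtown", 0.4, 0.7, 0.3),
        ]
        for p in summarize(posts):
            print(p.text)
    ```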
  2. Rodríguez-Vidal, J.; Gonzalo, J.; Plaza, L.; Anaya Sánchez, H.: Automatic detection of influencers in social networks : authority versus domain signals (2019) 0.00
    Abstract
    Given the task of finding influencers (opinion makers) for a given domain in a social network, we investigate (a) the relative importance of domain and authority signals, (b) the most effective way of combining signals (voting, classification, learning to rank, etc.) and how best to model the vocabulary signal, and (c) how large the gap between supervised and unsupervised methods is and what the practical consequences are. Our best result on the RepLab dataset (which improves the state of the art) uses language models to learn the domain-specific vocabulary used by influencers and combines domain and authority models with a Learning to Rank algorithm. Our experiments show that (a) both authority and domain evidence can be trained from the vocabulary of influencers; (b) once the language of influencers is modeled as a likelihood signal, further supervised learning and additional network-based signals provide only marginal improvements; and (c) the availability of training data sets is crucial for obtaining competitive results in this task. Our most remarkable finding is that influencers do use a distinctive vocabulary, which is a more reliable signal than nontextual network indicators such as the number of followers, retweets, and so on.
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.7, S.675-684
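    A minimal sketch of the vocabulary-likelihood idea mentioned in this abstract: a smoothed unigram language model is trained on texts from known influencers, and a candidate's text is scored by its log-likelihood under that model. The toy training texts, candidate texts, and the simple add-one smoothing are assumptions for illustration, not the RepLab setup.
    ```python
    # Illustrative sketch (not the RepLab system): model the vocabulary of known
    # influencers with a smoothed unigram language model and use the
    # log-likelihood of a candidate's text under that model as a vocabulary signal.
    import math
    from collections import Counter

    def train_unigram(texts):
        counts = Counter(w for t in texts for w in t.lower().split())
        total = sum(counts.values())
        vocab = len(counts)

        def logprob(text):
            # Add-one smoothing; unseen words get a small reserved probability.
            return sum(math.log((counts[w] + 1) / (total + vocab + 1))
                       for w in text.lower().split())
        return logprob

    if __name__ == "__main__":
        influencer_texts = [
            "quarterly earnings beat analyst expectations",
            "central bank raises interest rates again",
        ]
        lm = train_unigram(influencer_texts)
        candidates = {
            "analyst": "earnings and interest rates drive the market",
            "casual":  "had a great pizza with friends tonight",
        }
        # A higher (less negative) score means vocabulary closer to the influencers'.
        for name, text in candidates.items():
            print(name, round(lm(text), 2))
    ```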
  3. López-Ostenero, F.; Peinado, V.; Gonzalo, J.; Verdejo, F.: Interactive question answering : Is Cross-Language harder than monolingual searching? (2008) 0.00
    Abstract
    Is cross-language answer finding harder than monolingual answer finding for users? In this paper we provide initial quantitative and qualitative evidence to answer this question. In our study, which involves 16 users searching for answers to questions under four different system conditions, we find that interactive cross-language answer finding, using general-purpose Machine Translation systems and standard Information Retrieval machinery, is not substantially harder (in terms of accuracy) than its monolingual counterpart, although it takes more time. We have also seen that users need more context (full documents) to provide accurate answers than is usually considered by systems (paragraphs or passages). Finally, we discuss the limitations of standard evaluation methodologies for interactive Information Retrieval experiments in the case of cross-language question answering.
    Footnote
    Contribution to a thematic section: Evaluation of Interactive Information Retrieval Systems
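    A rough sketch of the kind of cross-language pipeline this abstract refers to: translate the question, then retrieve with standard bag-of-words retrieval. The tiny translation table and documents below are stand-ins; a real system would use a general-purpose MT engine and a full retrieval stack.
    ```python
    # Illustrative cross-language answer-finding pipeline: a stubbed MT step
    # followed by simple term-overlap ranking. Dictionary and documents are
    # illustrative assumptions, not the systems evaluated in the study.
    from collections import Counter

    MT_STUB = {"quién": "who", "descubrió": "discovered",
               "la": "the", "penicilina": "penicillin"}

    def translate(question_es: str) -> str:
        # Stand-in for a general-purpose Machine Translation system.
        return " ".join(MT_STUB.get(w, w) for w in question_es.lower().split())

    def rank(query_en: str, docs):
        q = Counter(query_en.lower().split())
        def overlap(doc: str) -> int:
            d = Counter(doc.lower().split())
            return sum(min(c, d[w]) for w, c in q.items())
        return sorted(docs, key=overlap, reverse=True)

    if __name__ == "__main__":
        docs = [
            "Alexander Fleming discovered penicillin in 1928.",
            "The Eiffel Tower was completed in 1889.",
        ]
        query = translate("quién descubrió la penicilina")
        print(rank(query, docs)[0])
    ```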
  4. López-Ostenero, F.; Gonzalo, J.; Verdejo, F.: Noun phrases as building blocks for cross-language search assistance (2005) 0.00
    Abstract
    This paper presents a Foreign-Language Search Assistant that uses noun phrases as the fundamental units for document translation and for query formulation, translation, and refinement. The system (a) supports the foreign-language document selection task by providing a cross-language indicative summary based on noun phrase translations, and (b) supports query formulation and refinement using the information displayed in the cross-language document summaries. Our results challenge two implicit assumptions in most cross-language Information Retrieval research: first, that once documents in the target language are found, Machine Translation is the optimal way of informing the user about their contents; and second, that in an interactive setting the optimal way of formulating and refining the query is to help the user choose appropriate translations for the query terms.
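    A minimal sketch of the noun-phrase idea, assuming a toy bilingual phrase dictionary and a trivial phrase matcher in place of a real noun-phrase chunker: phrases found in a foreign-language document are translated and concatenated into a cross-language indicative summary.
    ```python
    # Illustrative sketch (not the authors' assistant): extract simple noun
    # phrases from a Spanish document, translate them with a small bilingual
    # phrase dictionary, and show the translations as an indicative summary.
    # The dictionary and phrases are assumptions for illustration.
    import re

    PHRASE_DICT_ES_EN = {
        "energía solar": "solar energy",
        "cambio climático": "climate change",
        "paneles fotovoltaicos": "photovoltaic panels",
    }

    def noun_phrases(text: str):
        # Toy stand-in for a real noun-phrase chunker: match known phrases.
        return [p for p in PHRASE_DICT_ES_EN if re.search(re.escape(p), text.lower())]

    def indicative_summary(doc_es: str) -> str:
        return " / ".join(PHRASE_DICT_ES_EN[p] for p in noun_phrases(doc_es))

    if __name__ == "__main__":
        doc = ("La energía solar crece rápidamente; los paneles fotovoltaicos "
               "ayudan a mitigar el cambio climático.")
        # -> solar energy / climate change / photovoltaic panels
        print(indicative_summary(doc))
    ```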