Search (1 result, page 1 of 1)

  • author_ss:"Ahmadi, K."
  1. Balakrishnan, V.; Ahmadi, K.; Ravana, S.D.: Improving retrieval relevance using users' explicit feedback (2016) 0.02
    0.020747647 = product of:
      0.051869117 = sum of:
        0.036783673 = weight(_text_:study in 2921) [ClassicSimilarity], result of:
          0.036783673 = score(doc=2921,freq=4.0), product of:
            0.1448085 = queryWeight, product of:
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.044537213 = queryNorm
            0.25401598 = fieldWeight in 2921, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2514048 = idf(docFreq=4653, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2921)
        0.015085445 = product of:
          0.03017089 = sum of:
            0.03017089 = weight(_text_:22 in 2921) [ClassicSimilarity], result of:
              0.03017089 = score(doc=2921,freq=2.0), product of:
                0.15596174 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044537213 = queryNorm
                0.19345059 = fieldWeight in 2921, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2921)
          0.5 = coord(1/2)
      0.4 = coord(2/5)
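
    The breakdown above is Lucene's ClassicSimilarity (TF-IDF) explanation for this hit: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and the coord factors scale for the fraction of query clauses that matched. As a minimal sketch (not part of the catalogue output; the function and variable names below are ours), the following Python reproduces the 0.020747647 total from the numbers listed:

    import math

    def classic_term_weight(freq, doc_freq, max_docs, query_norm, field_norm):
        # One term's score under ClassicSimilarity:
        #   weight = queryWeight * fieldWeight
        #   queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm
        tf = math.sqrt(freq)                              # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))   # idf(docFreq, maxDocs)
        return (idf * query_norm) * (tf * idf * field_norm)

    QUERY_NORM = 0.044537213      # queryNorm from the explain tree
    FIELD_NORM = 0.0390625        # fieldNorm(doc=2921)

    # weight(_text_:study in 2921): freq=4, docFreq=4653, maxDocs=44218
    w_study = classic_term_weight(4.0, 4653, 44218, QUERY_NORM, FIELD_NORM)

    # weight(_text_:22 in 2921): freq=2, docFreq=3622, maxDocs=44218, scaled by coord(1/2)
    w_22 = classic_term_weight(2.0, 3622, 44218, QUERY_NORM, FIELD_NORM) * 0.5

    # Outer coord(2/5): only 2 of the 5 query clauses matched document 2921.
    print((w_study + w_22) * 0.4)   # ~0.020747647, the score shown for this record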
    
    Abstract
    Purpose: The purpose of this paper is to improve the relevance of users' search results by exploiting their explicit feedback.
    Design/methodology/approach: CoRRe, an explicit feedback model integrating three popular feedback mechanisms, namely Comment, Rating and Referral, is proposed in this study. The model is further enhanced using case-based reasoning to retrieve the top-5 results. A search engine prototype was developed using a Text REtrieval Conference (TREC) document collection, and results were evaluated at three cut-offs (top-5, top-10 and top-15). A user evaluation involving 28 students and 20 queries was administered.
    Findings: Both Mean Average Precision and Normalized Discounted Cumulative Gain indicate that CoRRe achieves the highest retrieval precision at all three cut-offs compared with the other feedback models, and independent t-tests showed the precision differences to be significant. Rating was the most popular technique among the participants and produced the best precision compared with referral and comments.
    Research limitations/implications: The findings suggest that retrieval relevance can be significantly improved when users' explicit feedback is integrated; web-based systems should therefore make use of this feedback to provide better recommendations or search results.
    Originality/value: The study is novel in that users' comments, ratings and referrals were all taken into consideration to improve their overall search experience.
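
    The evaluation described above relies on two standard ranking metrics, Mean Average Precision and Normalized Discounted Cumulative Gain, reported at the top-5, top-10 and top-15 cut-offs. As an illustration only (this is not the authors' implementation, and the document identifiers below are hypothetical), a Python sketch of how these metrics are typically computed for a single query:

    import math

    def average_precision(ranked, relevant, k):
        # Average precision at cutoff k for one query, binary relevance.
        # MAP is the mean of this value over all queries (20 in the study).
        hits, total = 0, 0.0
        for rank, doc in enumerate(ranked[:k], start=1):
            if doc in relevant:
                hits += 1
                total += hits / rank
        return total / min(len(relevant), k) if relevant else 0.0

    def ndcg(gains, k):
        # Normalized DCG at cutoff k; `gains` are graded relevance scores
        # (e.g. user ratings) in the order the system ranked the documents.
        dcg = sum(g / math.log2(r + 1) for r, g in enumerate(gains[:k], start=1))
        ideal = sum(g / math.log2(r + 1)
                    for r, g in enumerate(sorted(gains, reverse=True)[:k], start=1))
        return dcg / ideal if ideal > 0 else 0.0

    ranked_docs = ["d3", "d7", "d1", "d9", "d4"]     # hypothetical top-5 result list
    relevant_docs = {"d3", "d1", "d5"}               # hypothetical relevance judgements
    print(average_precision(ranked_docs, relevant_docs, k=5))
    print(ndcg([3, 0, 2, 0, 1], k=5))                # graded gains for the same ranking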
    Date
    20. 1.2015 18:30:22