Search (4 results, page 1 of 1)

  • author_ss:"Ravana, S.D."
  • year_i:[2010 TO 2020}
  1. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.03
    0.03482235 = product of:
      0.0696447 = sum of:
        0.0696447 = sum of:
          0.03890366 = weight(_text_:t in 2587) [ClassicSimilarity], result of:
            0.03890366 = score(doc=2587,freq=2.0), product of:
              0.17876579 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.04537884 = queryNorm
              0.21762364 = fieldWeight in 2587, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2587)
          0.030741034 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
            0.030741034 = score(doc=2587,freq=2.0), product of:
              0.15890898 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04537884 = queryNorm
              0.19345059 = fieldWeight in 2587, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2587)
      0.5 = coord(1/2)
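    The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown, and it can be reproduced from its own constants. The sketch below is a minimal stdlib reconstruction, not Lucene itself: the docFreq, maxDocs, queryNorm and fieldNorm values are copied from the explain output, and the formulas are the standard ClassicSimilarity ones (idf = 1 + ln(maxDocs/(docFreq+1)), tf = sqrt(freq)).

    ```python
    import math

    def idf(doc_freq, max_docs):
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
        i = idf(doc_freq, max_docs)
        query_weight = i * query_norm                     # queryWeight = idf * queryNorm
        field_weight = math.sqrt(freq) * i * field_norm   # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    # constants taken directly from the explain tree for doc 2587
    score_t  = term_score(2.0, 2338, 44218, 0.04537884, 0.0390625)  # _text_:t
    score_22 = term_score(2.0, 3622, 44218, 0.04537884, 0.0390625)  # _text_:22
    doc_score = 0.5 * (score_t + score_22)                # coord(1/2) factor
    ```

    Running this reproduces the displayed values: score_t ≈ 0.03890366 and doc_score ≈ 0.03482235, matching the tree's leaf weights and final score.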
    
    Abstract
    Purpose
    The purpose of this paper is to propose a method that yields more accurate results when comparing the performance of paired information retrieval (IR) systems than the current method, which is based on the mean effectiveness scores of the systems across a set of identified topics/queries.
    Design/methodology/approach
    In the proposed approach, instead of the classic method of using a set of topic scores, document-level scores are used as the evaluation unit. These document scores are defined document weights, which play the role of the systems' mean average precision (MAP) scores as the significance test's statistic. The experiments were conducted using the TREC 9 Web track collection.
    Findings
    The p-values generated through two types of significance tests, namely Student's t-test and the Mann-Whitney test, show that by using document-level scores as the evaluation unit, the difference between IR systems is more significant than when topic scores are used.
    Originality/value
    Using a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
    Date
    20. 1.2015 18:30:22
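    The significance-testing step this abstract describes can be sketched with a paired Student's t statistic over matched evaluation units (per-topic scores in the classic method, per-document scores in the proposed one). This is a minimal stdlib sketch, not the paper's code; the score values below are hypothetical.

    ```python
    import math
    from statistics import mean, stdev

    def paired_t(xs, ys):
        # Paired Student's t statistic for two systems scored on the
        # same evaluation units (per-topic or per-document).
        diffs = [x - y for x, y in zip(xs, ys)]
        n = len(diffs)
        return mean(diffs) / (stdev(diffs) / math.sqrt(n))

    # hypothetical per-unit effectiveness scores for systems A and B
    sys_a = [0.5, 0.6, 0.7, 0.8]
    sys_b = [0.4, 0.5, 0.65, 0.7]
    t_stat = paired_t(sys_a, sys_b)
    ```

    A larger t statistic (compared against the t distribution with n-1 degrees of freedom) yields a smaller p-value; the paper's claim is that document-level units make this difference more pronounced than topic-level units.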
  2. Balakrishnan, V.; Ahmadi, K.; Ravana, S.D.: Improving retrieval relevance using users' explicit feedback (2016) 0.03
    0.03482235 = product of:
      0.0696447 = sum of:
        0.0696447 = sum of:
          0.03890366 = weight(_text_:t in 2921) [ClassicSimilarity], result of:
            0.03890366 = score(doc=2921,freq=2.0), product of:
              0.17876579 = queryWeight, product of:
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.04537884 = queryNorm
              0.21762364 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.9394085 = idf(docFreq=2338, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
          0.030741034 = weight(_text_:22 in 2921) [ClassicSimilarity], result of:
            0.030741034 = score(doc=2921,freq=2.0), product of:
              0.15890898 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04537884 = queryNorm
              0.19345059 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
      0.5 = coord(1/2)
    
    Abstract
    Purpose
    The purpose of this paper is to improve the relevance of users' search results by leveraging their explicit feedback.
    Design/methodology/approach
    CoRRe, an explicit feedback model integrating three popular feedback types, namely comment, rating and referral, is proposed in this study. The model is further enhanced using case-based reasoning to retrieve the top-5 results. A search engine prototype was developed using a Text REtrieval Conference (TREC) document collection, and results were evaluated at three levels (top-5, top-10 and top-15). A user evaluation involving 28 students and 20 queries was administered.
    Findings
    Both mean average precision (MAP) and normalized discounted cumulative gain (nDCG) results indicate that CoRRe achieves the highest retrieval precision at all three levels compared with the other feedback models. Furthermore, independent t-tests showed the precision differences to be significant. Rating was the most popular technique among the participants, producing better precision than referral and comments.
    Research limitations/implications
    The findings suggest that retrieval relevance can be significantly improved when users' explicit feedback is integrated; web-based systems should therefore find ways to use users' feedback to provide better recommendations or search results.
    Originality/value
    The study is novel in that users' comments, ratings and referrals were taken into consideration to improve their overall search experience.
    Date
    20. 1.2015 18:30:22
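    The nDCG metric cited in this abstract has a standard definition that can be computed with the stdlib alone. This is a generic sketch of the metric with the usual log2 discount, not the paper's evaluation code, and the relevance grades below are made up.

    ```python
    import math

    def dcg(rels):
        # discounted cumulative gain: rank i (0-based) is discounted by log2(i + 2)
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

    def ndcg(rels, k=None):
        # normalized DCG at cutoff k (None = whole ranking):
        # DCG of the ranking divided by DCG of the ideal (sorted) ordering
        top = rels[:k] if k else rels
        ideal = sorted(rels, reverse=True)[:k] if k else sorted(rels, reverse=True)
        best = dcg(ideal)
        return dcg(top) / best if best > 0 else 0.0

    # hypothetical graded relevance judgments of a ranked result list
    ranking = [3, 2, 0, 1, 2]
    score_at_5 = ndcg(ranking, k=5)
    ```

    A perfectly ordered ranking scores 1.0; misplaced relevant documents lower the score, which is why the metric rewards the top-heavy orderings the CoRRe model aims to produce.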
  3. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.01
    0.010868597 = product of:
      0.021737194 = sum of:
        0.021737194 = product of:
          0.043474387 = sum of:
            0.043474387 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.043474387 = score(doc=2591,freq=4.0), product of:
                0.15890898 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04537884 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  4. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.01
    0.0076852585 = product of:
      0.015370517 = sum of:
        0.015370517 = product of:
          0.030741034 = sum of:
            0.030741034 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
              0.030741034 = score(doc=5287,freq=2.0), product of:
                0.15890898 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04537884 = queryNorm
                0.19345059 = fieldWeight in 5287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5287)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    20. 1.2015 18:30:22