Search (4 results, page 1 of 1)

  • author_ss:"Ravana, S.D."
  • type_ss:"a"
  • year_i:[2010 TO 2020}
  1. Balakrishnan, V.; Ahmadi, K.; Ravana, S.D.: Improving retrieval relevance using users' explicit feedback (2016) 0.16
    0.16088547 = product of:
      0.21451396 = sum of:
        0.029088326 = weight(_text_:web in 2921) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2921,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2921, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2921)
        0.07377557 = weight(_text_:search in 2921) [ClassicSimilarity], result of:
          0.07377557 = score(doc=2921,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 2921, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2921)
        0.11165006 = sum of:
          0.07815824 = weight(_text_:engine in 2921) [ClassicSimilarity], result of:
            0.07815824 = score(doc=2921,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.29552078 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
          0.03349182 = weight(_text_:22 in 2921) [ClassicSimilarity], result of:
            0.03349182 = score(doc=2921,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.19345059 = fieldWeight in 2921, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2921)
      0.75 = coord(3/4)
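    The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output: each term score is queryWeight * fieldWeight, where queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm, tf = sqrt(freq), and idf = 1 + ln(maxDocs / (docFreq + 1)); coord(3/4) scales the sum because only three of the query's four top-level clauses matched (the engine and 22 weights sit inside one nested clause). A minimal Python sketch, not part of the source system, reproduces the numbers:

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          # Lucene ClassicSimilarity: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1))
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          query_weight = idf * query_norm                    # idf * queryNorm
          field_weight = math.sqrt(freq) * idf * field_norm  # tf * idf * fieldNorm
          return query_weight * field_weight

      QUERY_NORM, FIELD_NORM, MAX_DOCS = 0.049439456, 0.0390625, 44218

      web    = classic_term_score(2.0, 4597, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.0290883
      search = classic_term_score(10.0, 3718, MAX_DOCS, QUERY_NORM, FIELD_NORM)  # ~0.0737756
      engine = classic_term_score(2.0, 570, MAX_DOCS, QUERY_NORM, FIELD_NORM)    # ~0.0781582
      t22    = classic_term_score(2.0, 3622, MAX_DOCS, QUERY_NORM, FIELD_NORM)   # ~0.0334918

      # coord(3/4): three of the four query clauses matched document 2921.
      print((web + search + engine + t22) * 3 / 4)  # ~0.16088547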
    
    Abstract
    Purpose: The purpose of this paper is to improve the relevance of users' search results by manipulating their explicit feedback.
    Design/methodology/approach: CoRRe, an explicit feedback model integrating three popular feedback types, namely Comment, Rating and Referral, is proposed in this study. The model is further enhanced using case-based reasoning in retrieving the top-5 results. A search engine prototype was developed using a Text REtrieval Conference (TREC) collection as the document collection, and results were evaluated at three levels (top-5, 10 and 15). A user evaluation involving 28 students was administered, focusing on 20 queries.
    Findings: Both Mean Average Precision and Normalized Discounted Cumulative Gain results indicate that CoRRe achieves the highest retrieval precision at all three levels compared with the other feedback models. Furthermore, independent t-tests showed the precision differences to be significant. Rating was the most popular technique among the participants, producing the best precision compared with referral and comments.
    Research limitations/implications: The findings suggest that search retrieval relevance can be significantly improved when users' explicit feedback is integrated; web-based systems should therefore find ways to use users' feedback to provide better recommendations or search results.
    Originality/value: The study is novel in that users' comments, ratings and referrals were taken into consideration to improve their overall search experience.
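    For reference, the two metrics reported in the findings can be sketched per query as follows. This is a generic Python sketch assuming binary relevance and the standard log2 discount, not the paper's implementation; MAP is the mean of AP over all queries.

      import math

      def average_precision(ranked_rels, num_relevant, k):
          # AP@k: precision at each relevant rank, averaged over the
          # query's relevant documents (ranked_rels holds 1/0 per rank).
          hits, total = 0, 0.0
          for rank, rel in enumerate(ranked_rels[:k], start=1):
              if rel:
                  hits += 1
                  total += hits / rank
          return total / num_relevant if num_relevant else 0.0

      def ndcg(ranked_gains, k):
          # nDCG@k with the standard 1 / log2(rank + 1) discount.
          dcg = sum(g / math.log2(r + 1) for r, g in enumerate(ranked_gains[:k], 1))
          ideal = sorted(ranked_gains, reverse=True)[:k]
          idcg = sum(g / math.log2(r + 1) for r, g in enumerate(ideal, 1))
          return dcg / idcg if idcg else 0.0

      # One query, evaluated at the paper's three cutoffs.
      rels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]
      for k in (5, 10, 15):
          print(k, average_precision(rels, sum(rels), k), ndcg(rels, k))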
    Date
    20. 1.2015 18:30:22
  2. Ravana, S.D.; Rajagopal, P.; Balakrishnan, V.: Ranking retrieval systems using pseudo relevance judgments (2015) 0.03
    0.026385307 = product of:
      0.052770615 = sum of:
        0.029088326 = weight(_text_:web in 2591) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2591,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2591, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2591)
        0.02368229 = product of:
          0.04736458 = sum of:
            0.04736458 = weight(_text_:22 in 2591) [ClassicSimilarity], result of:
              0.04736458 = score(doc=2591,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.27358043 = fieldWeight in 2591, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2591)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: In a system-based approach, replicating the web would require large test collections, and having human assessors judge the relevance of every document per topic to create relevance judgments is infeasible. Because of the large number of documents requiring judgment, human assessors may also introduce errors through disagreement. The paper aims to discuss these issues.
    Design/methodology/approach: This study explores exponential variation and document ranking methods that generate a reliable set of relevance judgments (pseudo relevance judgments) to reduce human effort. These methods overcome the problem of judging large numbers of documents while avoiding human disagreement errors during the judgment process. The study utilizes two key factors: the number of occurrences of each document per topic across all the system runs, and document rankings.
    Findings: The effectiveness of the proposed method is evaluated using the correlation coefficient between system rankings, based on mean average precision scores, under the original Text REtrieval Conference (TREC) relevance judgments and under the pseudo relevance judgments. The results suggest that the proposed document ranking method with a pool depth of 100 could be a reliable alternative that reduces the human effort and disagreement errors involved in generating TREC-like relevance judgments.
    Originality/value: The simple methods proposed in this study improve the correlation coefficient when generating alternative relevance judgments without human assessors, contributing to information retrieval evaluation.
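    The occurrence-counting factor can be illustrated with a small sketch. The function below is hypothetical (the abstract does not spell out the exponential variation or document ranking methods); it simply pools the top pool_depth documents of each run for a topic and applies an assumed majority vote:

      from collections import Counter

      def pseudo_qrels(runs, pool_depth=100, min_votes=None):
          # runs: one ranked list of document ids per system, for one topic.
          # Count how often each document appears within pool_depth across
          # the runs; documents in at least min_votes runs become pseudo-relevant.
          if min_votes is None:
              min_votes = max(1, len(runs) // 2)  # assumed majority threshold
          votes = Counter(doc for run in runs for doc in run[:pool_depth])
          return {doc for doc, n in votes.items() if n >= min_votes}

    System rankings under the original and pseudo judgments would then be compared with a rank correlation such as Kendall's tau.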
    Date
    20. 1.2015 18:30:22
    18. 9.2018 18:22:56
  3. Ravana, S.D.; Taheri, M.S.; Rajagopal, P.: Document-based approach to improve the accuracy of pairwise comparison in evaluating information retrieval systems (2015) 0.02
    0.022917118 = product of:
      0.045834236 = sum of:
        0.029088326 = weight(_text_:web in 2587) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2587,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2587, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2587)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 2587) [ClassicSimilarity], result of:
              0.03349182 = score(doc=2587,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 2587, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2587)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose: The purpose of this paper is to propose a method that compares the performance of paired information retrieval (IR) systems more accurately than the current method, which is based on the systems' mean effectiveness scores across a set of identified topics/queries.
    Design/methodology/approach: In the proposed approach, document-level scores rather than the classic set of topic scores are used as the evaluation unit. These document scores are defined document weights, which take over the role that the systems' mean average precision (MAP) scores play as the significance test statistic. The experiments were conducted using the TREC 9 Web track collection.
    Findings: The p-values generated through the two significance tests, namely Student's t-test and the Mann-Whitney test, show that using document-level scores as the evaluation unit makes the difference between IR systems more significant than utilizing topic scores.
    Originality/value: Utilizing a suitable test collection is a primary prerequisite for the comparative evaluation of IR systems. However, in addition to reusable test collections, accurate statistical testing is a necessity for these evaluations. The findings of this study will assist IR researchers in evaluating their retrieval systems and algorithms more accurately.
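    Both tests named above are standard; a minimal sketch using SciPy on hypothetical document-level weights for two systems (the paper's actual weighting scheme is not reproduced here):

      from scipy import stats

      # Hypothetical document weights over the same pooled documents.
      system_a = [0.62, 0.41, 0.55, 0.38, 0.71, 0.33, 0.58, 0.44]
      system_b = [0.48, 0.29, 0.51, 0.30, 0.52, 0.27, 0.40, 0.35]

      t_stat, p_t = stats.ttest_ind(system_a, system_b)     # Student's t-test
      u_stat, p_u = stats.mannwhitneyu(system_a, system_b)  # Mann-Whitney U
      print(f"t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")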
    Date
    20. 1.2015 18:30:22
  4. Rajagopal, P.; Ravana, S.D.; Koh, Y.S.; Balakrishnan, V.: Evaluating the effectiveness of information retrieval systems using effort-based relevance judgment (2019) 0.00
    0.0041864775 = product of:
      0.01674591 = sum of:
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 5287) [ClassicSimilarity], result of:
              0.03349182 = score(doc=5287,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 5287, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5287)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    20. 1.2015 18:30:22