Search (2 results, page 1 of 1)

  • Filter: author_ss:"Scholer, F."
  • Filter: author_ss:"Wu, M."
  1. Wu, M.; Hawking, D.; Turpin, A.; Scholer, F.: Using anchor text for homepage and topic distillation search tasks (2012)
    
    Abstract
    Past work suggests that anchor text is a good source of evidence that can be used to improve web searching. Two approaches for making use of this evidence include fusing search results from an anchor text representation and the original text representation based on a document's relevance score or rank position, and combining term frequency from both representations during the retrieval process. Although these approaches have each been tested and compared against baselines, different evaluations have used different baselines; no consistent work enables rigorous cross-comparison between these methods. The purpose of this work is threefold. First, we survey existing fusion methods of using anchor text in search. Second, we compare these methods with common testbeds and web search tasks, with the aim of identifying the most effective fusion method. Third, we try to correlate search performance with the characteristics of a test collection. Our experimental results show that the best performing method in each category can significantly improve search results over a common baseline. However, there is no single technique that consistently outperforms competing approaches across different collections and search tasks.
    Type
    a
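The abstract above compares two families of fusion methods: score-based fusion of the anchor-text and full-text result lists, and rank-based fusion. As a hedged illustration, the sketch below implements two standard techniques of each kind, CombSUM (sum of min-max-normalized scores) and reciprocal rank fusion; these are common representatives, not necessarily the exact methods evaluated in the paper, and all document IDs and scores are made up.

```python
# Illustrative result-fusion sketch. CombSUM and reciprocal rank fusion
# are standard techniques of the kinds the abstract describes; they are
# assumptions here, not the paper's confirmed method list.

def combsum(runs):
    """Score-based fusion: sum each document's min-max-normalized scores.

    Each run is a dict mapping doc_id -> retrieval score.
    Returns doc_ids sorted by fused score, best first.
    """
    fused = {}
    for run in runs:
        if not run:
            continue
        lo, hi = min(run.values()), max(run.values())
        span = (hi - lo) or 1.0  # avoid division by zero for flat runs
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + (score - lo) / span
    return sorted(fused, key=fused.get, reverse=True)

def rrf(runs, k=60):
    """Rank-based fusion: each run contributes 1 / (k + rank) per document.

    Each run is a list of doc_ids, best first.
    """
    fused = {}
    for run in runs:
        for rank, doc in enumerate(run, start=1):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical anchor-text and full-text runs over the same collection:
anchor_run = {"d1": 2.0, "d2": 1.5, "d4": 1.0}
fulltext_run = {"d2": 5.0, "d3": 4.0, "d4": 3.0}
print(combsum([anchor_run, fulltext_run]))  # d2 leads: strong in both runs
```

A document retrieved by both representations accumulates evidence from each, which is the intuition behind fusing an anchor-text run with the original-text run.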
  2. Wu, M.; Turpin, A.; Thom, J.A.; Scholer, F.; Wilkinson, R.: Cost and benefit estimation of experts' mediation in an enterprise search (2014)
    
    Abstract
The success of an enterprise information retrieval system is determined by interactions among three key entities: the search engine employed; the service provider who delivers, modifies, and maintains the engine; and the users of the service within the organization. Evaluations of enterprise search have predominantly focused on the effectiveness and efficiency of the engine, with very little analysis of user involvement in the process, and none on the role of service providers. We propose and evaluate a model of costs and benefits to a service provider when investing in enhancements to the ranking of documents returned by their search engine. We demonstrate the model through a case study to analyze the potential impact of using domain experts to provide enhanced mediated search results. By demonstrating how to quantify the cost and benefit of an improved information retrieval system to the service provider, our case study shows that using the relevance assessments of domain experts to rerank original search results can significantly improve the accuracy of ranked lists. Moreover, the service provider gains substantial return on investment and a higher search success rate by investing in the relevance assessments of domain experts. Our cost and benefit analysis results are contrasted with standard modes of effectiveness analysis, including quantitative (using measures such as precision) and qualitative (through user preference surveys) approaches. Modeling costs and benefits explicitly can provide useful insights that the other approaches do not convey.
    Type
    a
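The abstract above argues for quantifying a service provider's return on investment when paying domain experts to assess relevance and rerank results. As a hedged illustration only, the sketch below computes cost, benefit, and ROI under a simple linear model; the function, its parameters, and all figures are hypothetical placeholders, not the paper's model or data.

```python
# Hypothetical cost/benefit sketch in the spirit of the abstract.
# The linear model and every number below are illustrative assumptions.

def expert_mediation_roi(queries_mediated, cost_per_assessment,
                         searches_per_query, value_per_success,
                         success_rate_before, success_rate_after):
    """Return (cost, benefit, roi) of expert-mediated reranking.

    cost    = total paid for expert relevance assessments
    benefit = value of the additional successful searches enabled
    roi     = (benefit - cost) / cost
    """
    cost = queries_mediated * cost_per_assessment
    extra_successes = (searches_per_query * queries_mediated
                       * (success_rate_after - success_rate_before))
    benefit = extra_successes * value_per_success
    roi = (benefit - cost) / cost
    return cost, benefit, roi

# Placeholder scenario: 100 frequent queries mediated at $20 each;
# each query is issued 50 times; a successful search is worth $5;
# mediation lifts the search success rate from 60% to 80%.
cost, benefit, roi = expert_mediation_roi(
    queries_mediated=100, cost_per_assessment=20.0,
    searches_per_query=50, value_per_success=5.0,
    success_rate_before=0.6, success_rate_after=0.8)
print(cost, benefit, roi)
```

Making both sides of the ledger explicit like this is what lets the paper-style analysis compare an investment in expert mediation against ordinary effectiveness measures such as precision.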