Search (2 results, page 1 of 1)

  • author_ss:"Wacholder, N."
  • year_i:[2010 TO 2020}
  1. Liu, Y.-H.; Wacholder, N.: Evaluating the impact of MeSH (Medical Subject Headings) terms on different types of searchers (2017) 0.00
    0.0019955188 = product of:
      0.011973113 = sum of:
        0.011973113 = weight(_text_:in in 5096) [ClassicSimilarity], result of:
          0.011973113 = score(doc=5096,freq=10.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.20163295 = fieldWeight in 5096, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.046875 = fieldNorm(doc=5096)
      0.16666667 = coord(1/6)
    
    Abstract
    A commonly used technique for improving search engine performance is result caching. In result caching, precomputed results (e.g., URLs and snippets of best-matching pages) of certain queries are stored in fast-access storage. Future occurrences of a query whose results are already stored in the cache can be served directly from the result cache, eliminating the need to process the query using costly computing resources. Although other performance metrics are possible, the main metric for evaluating the success of a result cache is its hit rate. In this work, we present a machine learning approach that improves the hit rate of a result cache by leveraging a large number of features extracted from search engine query logs. We then apply the proposed machine learning approach to static, dynamic, and static-dynamic caching. Compared to previous methods in the literature, the proposed approach improves the hit rate of the result cache by up to 0.66%, which corresponds to 9.60% of the potential room for improvement.
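The result-caching mechanism described in the abstract can be sketched as a small LRU cache that tracks its own hit rate. This is a minimal illustration only; the class and method names below are invented for this sketch and are not from the paper, which additionally uses query-log features and machine learning to decide what to cache.

```python
from collections import OrderedDict

class ResultCache:
    """Minimal LRU result cache that records its hit rate (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # query -> precomputed result list
        self.hits = 0
        self.lookups = 0

    def get(self, query, compute):
        self.lookups += 1
        if query in self.store:
            self.hits += 1
            self.store.move_to_end(query)    # mark as most recently used
            return self.store[query]
        result = compute(query)              # costly backend query processing
        self.store[query] = result
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used entry
        return result

    @property
    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 0.0
```

Replaying a query log through such a cache and reading `hit_rate` is the kind of evaluation the abstract's 0.66% improvement figure refers to, though the paper's caching policies are learned rather than purely LRU.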
  2. Muresan, S.; Gonzalez-Ibanez, R.; Ghosh, D.; Wacholder, N.: Identification of nonliteral language in social media : a case study on sarcasm (2016) 0.00
    0.0012881019 = product of:
      0.007728611 = sum of:
        0.007728611 = weight(_text_:in in 3155) [ClassicSimilarity], result of:
          0.007728611 = score(doc=3155,freq=6.0), product of:
            0.059380736 = queryWeight, product of:
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.043654136 = queryNorm
            0.1301535 = fieldWeight in 3155, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.3602545 = idf(docFreq=30841, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3155)
      0.16666667 = coord(1/6)
    
    Abstract
    With the rapid development of social media, spontaneous user-generated content such as tweets and forum posts has become an important material for tracking people's opinions and sentiments online. A major hurdle for current state-of-the-art automatic methods for sentiment analysis is the fact that human communication often involves the use of sarcasm or irony, where the author means the opposite of what she/he says. Sarcasm transforms the polarity of an apparently positive or negative utterance into its opposite. The lack of naturally occurring utterances labeled for sarcasm is one of the key problems for the development of machine-learning methods for sarcasm detection. We report on a method for constructing a corpus of sarcastic Twitter messages in which the determination of the sarcasm of each message has been made by its author. We use this reliable corpus to compare sarcastic utterances in Twitter to utterances that express positive or negative attitudes without sarcasm. We investigate the impact of lexical and pragmatic factors on machine-learning effectiveness for identifying sarcastic utterances, and we compare the performance of machine-learning techniques and human judges on this task.
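The lexical and pragmatic cues the abstract refers to can be illustrated with a minimal feature extractor. The word list and cues below are invented for this sketch and are not the authors' feature set; the author self-labeling cue reflects the abstract's statement that each message's sarcasm is determined by its own author.

```python
import re

# Illustrative cue list; the paper's actual lexical/pragmatic features are richer.
POSITIVE_WORDS = {"great", "love", "wonderful", "nice", "awesome"}

def extract_features(tweet):
    """Extract simple lexical and pragmatic cues from one tweet (sketch)."""
    text = tweet.lower()
    tokens = re.findall(r"[a-z']+", text)
    return {
        "positive_word_count": sum(t in POSITIVE_WORDS for t in tokens),  # lexical polarity
        "has_ellipsis": "..." in text,                   # pragmatic: trailing-off tone
        "exclamations": text.count("!"),                 # pragmatic: emphasis
        "all_caps_tokens": sum(t.isupper() and len(t) > 1 for t in tweet.split()),  # shouting
        "has_sarcasm_hashtag": "#sarcasm" in text,       # author self-labeling cue
    }
```

Feature dictionaries like this would then feed a standard classifier, which is compared against human judges on the same labeled corpus.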