Search (2 results, page 1 of 1)

  • author_ss:"Cavedon, L."
  1. Spina, D.; Trippas, J.R.; Cavedon, L.; Sanderson, M.: Extracting audio summaries to support effective spoken document search (2017)

    Abstract
    We address the challenge of extracting query biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection, which we have made publicly available.
    Type
    a
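
    The sketch below illustrates the general idea of query-biased summary extraction described in the abstract above: score fixed-length windows of an ASR transcript by overlap with the query terms and return the best window as a snippet. This is a minimal illustration, not the authors' published method; the window length and the term-overlap scoring are assumptions.

    ```python
    # Minimal sketch of query-biased snippet extraction from an ASR transcript.
    # Assumptions (not from the paper): a fixed word window and simple
    # query-term overlap as the segment score.

    def extract_snippet(transcript: str, query: str, window: int = 30) -> str:
        """Return the window of `window` consecutive words sharing the most
        terms with the query (a simple query-biased summary)."""
        words = transcript.split()
        query_terms = {t.lower() for t in query.split()}
        best_start, best_score = 0, -1
        for start in range(0, max(1, len(words) - window + 1)):
            segment = words[start:start + window]
            # Count query terms in this window, ignoring trailing punctuation.
            score = sum(1 for w in segment if w.lower().strip(".,!?") in query_terms)
            if score > best_score:
                best_start, best_score = start, score
        return " ".join(words[best_start:best_start + window])

    if __name__ == "__main__":
        asr_text = ("today we talk about podcast search and how automatic "
                    "speech recognition errors affect snippet quality")
        print(extract_snippet(asr_text, "podcast search", window=8))
    ```

    In a spoken-document setting, the selected text window would then be located in the original audio and played back to the user rather than displayed, which is the scenario the abstract's crowdsourced experiment evaluates.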
  2. Macias-Galindo, D.; Cavedon, L.; Thangarajah, J.; Wong, W.: Effects of domain on measures of semantic relatedness (2015)

    Abstract
    Measures of semantic relatedness have been used in a variety of applications in information retrieval and language technology, such as measuring document similarity and cohesion of text. Definitions of such measures have ranged from using distance-based calculations over WordNet or other taxonomies to statistical distributional metrics over document collections such as Wikipedia or the Web. Existing measures do not explicitly consider the domain associations of terms when calculating relatedness: This article demonstrates that domain matters. We construct a data set of pairs of terms with associated domain information and extract pairs that are scored nearly identical by a sample of existing semantic-relatedness measures. We show that human judgments reliably score those pairs containing terms from the same domain as significantly more related than cross-domain pairs, even though the semantic-relatedness measures assign the pairs similar scores. We provide further evidence for this result using a machine learning setting by demonstrating that domain is an informative feature when learning a metric. We conclude that existing relatedness measures do not account for domain in the same way or to the same extent as do human judges.
    Type
    a
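
    The sketch below shows the kind of WordNet distance-based relatedness measure the abstract refers to, applied to one same-domain and one cross-domain word pair. The pairs are invented examples, not items from the article's data set, and the measure is ordinary path similarity, not the authors' evaluation setup. It requires NLTK with the WordNet corpus (nltk.download('wordnet')).

    ```python
    # Minimal sketch of a WordNet-based semantic-relatedness measure.
    # The word pairs below are illustrative assumptions; the article's
    # point is that such a measure can give similar scores to pairs
    # that human judges separate by domain.

    from nltk.corpus import wordnet as wn

    def max_path_similarity(word1: str, word2: str) -> float:
        """Best path similarity over all noun-synset combinations."""
        scores = [s1.path_similarity(s2) or 0.0
                  for s1 in wn.synsets(word1, pos=wn.NOUN)
                  for s2 in wn.synsets(word2, pos=wn.NOUN)]
        return max(scores, default=0.0)

    same_domain = ("guitar", "violin")    # both musical instruments
    cross_domain = ("guitar", "scalpel")  # music vs. medicine
    for pair in (same_domain, cross_domain):
        print(pair, round(max_path_similarity(*pair), 3))
    ```

    Comparing the two printed scores against human ratings of the same pairs is, in miniature, the contrast the abstract describes: existing measures score such pairs similarly, while human judges reliably rate the same-domain pair as more related.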