Search (2 results, page 1 of 1)

  • × author_ss:"Downie, J.S."
  • × author_ss:"Hu, X."
  • × year_i:[2010 TO 2020}
  1. Hu, X.; Lee, J.H.; Bainbridge, D.; Choi, K.; Organisciak, P.; Downie, J.S.: The MIREX grand challenge : a framework of holistic user-experience evaluation in music information retrieval (2017) 0.01
    0.0061772587 = product of:
      0.043240808 = sum of:
        0.012107591 = weight(_text_:information in 3321) [ClassicSimilarity], result of:
          0.012107591 = score(doc=3321,freq=8.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.23274569 = fieldWeight in 3321, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3321)
        0.031133216 = weight(_text_:retrieval in 3321) [ClassicSimilarity], result of:
          0.031133216 = score(doc=3321,freq=6.0), product of:
            0.08963835 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.029633347 = queryNorm
            0.34732026 = fieldWeight in 3321, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3321)
      0.14285715 = coord(2/14)
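    The tree above is Lucene's explain output for a ClassicSimilarity (TF-IDF) score: each matching query term contributes queryWeight x fieldWeight, i.e. tf x idf^2 x queryNorm x fieldNorm, and the per-term sum is scaled by the coordination factor coord = matching terms / query terms. A minimal Python sketch that reproduces record 1's score from the values shown (the helper name term_score is illustrative, not part of Lucene's API):

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity per-term score: queryWeight * fieldWeight,
          # where queryWeight = idf * queryNorm and
          # fieldWeight = sqrt(freq) * idf * fieldNorm.
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      QUERY_NORM, FIELD_NORM = 0.029633347, 0.046875

      # Two of the 14 query terms match document 3321.
      info = term_score(8.0, 1.7554779, QUERY_NORM, FIELD_NORM)  # ~0.0121076
      retr = term_score(6.0, 3.024915, QUERY_NORM, FIELD_NORM)   # ~0.0311332
      print((info + retr) * (2 / 14))                            # ~0.0061772587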
    
    Abstract
    Music Information Retrieval (MIR) evaluation has traditionally focused on system-centered approaches in which components of MIR systems are evaluated against predefined data sets and golden answers (i.e., ground truth). Such system-centered approaches have two major limitations: (a) the evaluation focuses on subtasks in music information retrieval rather than on entire systems, and (b) users and their interactions with MIR systems are largely excluded. This article describes the first implementation of a holistic user-experience evaluation in MIR, the MIREX Grand Challenge, in which complete MIR systems are evaluated with user experience as the single overarching goal. It is the first time that complete MIR systems have been evaluated with end users in a realistic scenario. We present the design of the evaluation task, the evaluation criteria, a novel evaluation interface, and the data-collection platform. This is followed by an analysis of the results, reflections on the experience and lessons learned, and plans for future directions.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, pp. 97-112
  2. Hu, X.; Choi, K.; Downie, J.S.: A framework for evaluating multimodal music mood classification (2017) 0.00
    6.115257E-4 = product of:
      0.00856136 = sum of:
        0.00856136 = weight(_text_:information in 3354) [ClassicSimilarity], result of:
          0.00856136 = score(doc=3354,freq=4.0), product of:
            0.052020688 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.029633347 = queryNorm
            0.16457605 = fieldWeight in 3354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3354)
      0.071428575 = coord(1/14)
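    Record 2 scores an order of magnitude lower because only one of the 14 query terms ("information", freq = 4.0) matches, so coord(1/14) applies. Reusing the illustrative term_score helper from the sketch under record 1:

      score = term_score(4.0, 1.7554779, QUERY_NORM, FIELD_NORM) * (1 / 14)
      print(score)  # ~6.115257e-4, matching the explain tree above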
    
    Abstract
    This research proposes a framework for music mood classification that uses multiple and complementary information sources, namely music audio, lyric text, and social tags associated with music pieces. This article presents the framework and a thorough evaluation of each of its components. Experimental results on a large data set of 18 mood categories show that combining lyrics and audio significantly outperformed systems using audio-only features. Automatic feature selection techniques were further shown to reduce the feature space. In addition, an examination of learning curves shows that hybrid systems using lyrics and audio needed fewer training samples and shorter audio clips to achieve the same or better classification accuracies than systems using lyrics or audio alone. Finally, performance comparisons reveal the relative importance of audio and lyric features across mood categories.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, pp. 273-285
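    The abstract of record 2 outlines a multimodal pipeline: audio, lyric, and (in the full framework) social-tag features feeding a mood classifier. As a purely illustrative sketch of one common hybrid strategy, feature-level fusion by concatenation; the feature dimensions, data, and classifier below are assumptions, not the authors' pipeline:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler

      # Hypothetical pre-extracted features, one row per song (placeholder data).
      rng = np.random.default_rng(0)
      audio_feats = rng.normal(size=(200, 32))  # e.g. spectral descriptors
      lyric_feats = rng.normal(size=(200, 50))  # e.g. bag-of-words weights
      moods = rng.integers(0, 18, size=200)     # 18 mood categories, as in the paper

      # Fuse the modalities by scaling each block and concatenating, then
      # train a single classifier on the joint representation.
      X = np.hstack([StandardScaler().fit_transform(audio_feats),
                     StandardScaler().fit_transform(lyric_feats)])
      clf = LogisticRegression(max_iter=1000).fit(X, moods)
      print(clf.score(X, moods))  # training accuracy on the synthetic data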