Search (7 results, page 1 of 1)

  • author_ss:"Hu, X."
  • year_i:[2010 TO 2020}
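The two facet filters above use Solr field-query syntax; `year_i:[2010 TO 2020}` is a range with an inclusive lower bound and an exclusive upper bound, and the explain trees below show the free-text term was matched against the `_text_` field. A minimal sketch of reproducing this search as a Solr request (the endpoint URL and core name are assumptions; only the query string is built, no request is sent):

```python
import urllib.parse

# Hypothetical Solr endpoint; the actual URL of this catalog is not shown.
SOLR_URL = "http://localhost:8983/solr/catalog/select"

params = {
    "q": "_text_:information",              # free-text term scored in the explain trees
    "fq": ['author_ss:"Hu, X."',            # exact-match facet on a string field
           "year_i:[2010 TO 2020}"],        # from 2010 inclusive up to, but excluding, 2020
    "debugQuery": "true",                   # ask Solr to emit the per-hit explain trees
    "wt": "json",
}
# doseq=True expands the list value into two separate fq parameters
query_string = urllib.parse.urlencode(params, doseq=True)
request_url = f"{SOLR_URL}?{query_string}"
```

With `debugQuery=true`, the JSON response carries an `explain` section per document, which is the source of the ClassicSimilarity breakdowns shown for each result.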
  1. Hu, X.; Lee, J.H.; Bainbridge, D.; Choi, K.; Organisciak, P.; Downie, J.S.: The MIREX grand challenge : a framework of holistic user-experience evaluation in music information retrieval (2017) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 3321) [ClassicSimilarity], result of:
          0.014277775 = score(doc=3321,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 3321, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3321)
      0.25 = coord(1/4)
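Each score above is a Lucene ClassicSimilarity (TF-IDF) explain tree: the field weight is tf × idf × fieldNorm, the query weight is idf × queryNorm, and their product is scaled by the coord factor (here 1/4, one of four query clauses matched). As a minimal sketch (function and parameter names are illustrative), the arithmetic of the first explanation (doc 3321) can be reproduced:

```python
import math

def classic_similarity(freq, doc_freq, max_docs, field_norm, query_norm, coord):
    """Reproduce Lucene's ClassicSimilarity score from an explain tree."""
    tf = math.sqrt(freq)                               # 2.828427 for freq=8.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))    # 1.7554779 for docFreq=20772
    query_weight = idf * query_norm                    # 0.06134496
    field_weight = tf * idf * field_norm               # 0.23274569
    return coord * query_weight * field_weight

# Values taken directly from the first result's explanation:
score = classic_similarity(freq=8.0, doc_freq=20772, max_docs=44218,
                           field_norm=0.046875, query_norm=0.034944877,
                           coord=0.25)
# score ≈ 0.0035694437, matching the displayed value
```

The remaining six results follow the same formula; only freq, fieldNorm, and the document id change.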
    
    Abstract
    Music Information Retrieval (MIR) evaluation has traditionally focused on system-centered approaches where components of MIR systems are evaluated against predefined data sets and golden answers (i.e., ground truth). There are two major limitations of such system-centered evaluation approaches: (a) The evaluation focuses on subtasks in music information retrieval, but not on entire systems and (b) users and their interactions with MIR systems are largely excluded. This article describes the first implementation of a holistic user-experience evaluation in MIR, the MIREX Grand Challenge, where complete MIR systems are evaluated, with user experience being the single overarching goal. It is the first time that complete MIR systems have been evaluated with end users in a realistic scenario. We present the design of the evaluation task, the evaluation criteria, a novel evaluation interface, and the data-collection platform. This is followed by an analysis of the results, reflection on the experience and lessons learned, and plans for future directions.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.97-112
  2. Hu, X.; Ng, J.; Xia, S.: User-centered evaluation of metadata schema for nonmovable cultural heritage : murals and stone cave temples (2018) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 4576) [ClassicSimilarity], result of:
          0.013302531 = score(doc=4576,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 4576, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4576)
      0.25 = coord(1/4)
    
    Abstract
    Digitization provides a solution for documenting and preserving nonmovable cultural heritage. Despite preservation efforts around the world, no well-accepted metadata schema has been developed for murals and stone cave temples, which are often high-value heritage sites built in ancient times. In addition, the literature is scarce on the user-centered evaluation of metadata schemas of this kind. This study therefore aims to offer insights on developing and evaluating a metadata schema for organizing information about these historic and complex cultural heritage sites. In-depth interviews were conducted with a total of 30 users, including 18 professional and 12 public users, and the interview transcripts were coded using a qualitative content analysis approach. Findings reveal the importance of specific metadata elements as perceived by the two groups of end users, which correlated with their cultural heritage information-seeking behaviors. In addition, users raised the issues of standardized cataloging of cultural heritage information and interoperability among metadata schemas as ways to enhance the user experience on digital cultural heritage platforms. The coding schema developed in this study can serve as a framework for follow-up evaluations of metadata schemas, contributing to the ongoing development of cultural heritage metadata.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.12, S.1476-1487
  3. Hu, X.; Kando, N.: Task complexity and difficulty in music information retrieval (2017) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 3690) [ClassicSimilarity], result of:
          0.010304097 = score(doc=3690,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 3690, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3690)
      0.25 = coord(1/4)
    
    Abstract
    There has been little research on task complexity and difficulty in music information retrieval (MIR), whereas many studies in the text retrieval domain have found that task complexity and difficulty have significant effects on user effectiveness. This study aimed to bridge the gap by exploring i) the relationship between task complexity and difficulty; ii) factors affecting task difficulty; and iii) the relationship between task difficulty, task complexity, and user search behaviors in MIR. An empirical user experiment was conducted with 51 participants and a novel MIR system. The participants searched for 6 topics across 3 complexity levels. The results revealed that i) perceived task difficulty in music search is influenced by task complexity, user background, system affordances, and task uncertainty and enjoyability; and ii) perceived task difficulty in MIR is significantly correlated with effectiveness metrics such as the number of songs found, number of clicks, and task completion time. The findings have implications for the design of music search tasks (in research) or use cases (in system development) as well as future MIR systems that can detect task difficulty based on user effectiveness metrics.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.7, S.1711-1723
  4. Hu, X.; Choi, K.; Downie, J.S.: A framework for evaluating multimodal music mood classification (2017) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 3354) [ClassicSimilarity], result of:
          0.010095911 = score(doc=3354,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 3354, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3354)
      0.25 = coord(1/4)
    
    Abstract
    This research proposes a framework for music mood classification that uses multiple and complementary information sources, namely, music audio, lyric text, and social tags associated with music pieces. This article presents the framework and a thorough evaluation of each of its components. Experimental results on a large data set of 18 mood categories show that combining lyrics and audio significantly outperformed systems using audio-only features. Automatic feature selection techniques were further shown to reduce the feature space. In addition, the examination of learning curves shows that the hybrid systems using lyrics and audio needed fewer training samples and shorter audio clips to achieve the same or better classification accuracies than systems using lyrics or audio alone. Finally, performance comparisons reveal the relative importance of audio and lyric features across mood categories.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.273-285
  5. Hu, X.; Yang, Y.-H.: The mood of Chinese Pop music : representation and recognition (2017) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 3755) [ClassicSimilarity], result of:
          0.010095911 = score(doc=3755,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 3755, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3755)
      0.25 = coord(1/4)
    
    Abstract
    Music mood recognition (MMR) has attracted much attention in music information retrieval research, yet there are few MMR studies that focus on non-Western music. In addition, little has been done to connect the 2 most widely adopted music mood representation models: categorical and dimensional. To bridge these gaps, we constructed a new data set consisting of 818 Chinese Pop (C-Pop) songs, 3 complete sets of mood annotations in both representations, and audio features corresponding to 5 distinct categories of musical characteristics. The mood space of C-Pop songs was analyzed and compared to that of Western Pop songs. We also explored the relationship between categorical and dimensional annotations, and the results revealed that one set of annotations could be reliably predicted from the other. Classification and regression experiments were conducted on the data set, providing benchmarks for future research on MMR of non-Western music. Based on these analyses, we reflect on and discuss the implications of the findings for MMR research.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.8, S.1899-1910
  6. Hu, X.; Rousseau, R.; Chen, J.: ¬A new approach for measuring the value of patents based on structural indicators for ego patent citation networks (2012) 0.00
    0.0020821756 = product of:
      0.008328702 = sum of:
        0.008328702 = weight(_text_:information in 445) [ClassicSimilarity], result of:
          0.008328702 = score(doc=445,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.13576832 = fieldWeight in 445, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=445)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.9, S.1834-1842
  7. Hu, X.; Rousseau, R.: Do citation chimeras exist? : The case of under-cited influential articles suffering delayed recognition (2019) 0.00
    0.0017847219 = product of:
      0.0071388874 = sum of:
        0.0071388874 = weight(_text_:information in 5217) [ClassicSimilarity], result of:
          0.0071388874 = score(doc=5217,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.116372846 = fieldWeight in 5217, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5217)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 70(2019) no.5, S.499-508