Search (6 results, page 1 of 1)

  • Active filter: author_ss:"Downie, J.S."
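  The score breakdowns under each result are Lucene "explain" trees of the kind Solr emits when debug output is requested. As a minimal sketch of how such a listing could be fetched (the endpoint, core name, and full query are assumptions; the page itself only reveals the author_ss filter and that the terms "of" and "systems" matched in the _text_ field):

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical Solr endpoint and core name; not taken from this page.
    base = "http://localhost:8983/solr/literature/select"
    params = {
        "q": "_text_:of _text_:systems",    # assumed fragment of the real nine-clause query
        "fq": 'author_ss:"Downie, J.S."',   # the active facet filter shown above
        "debugQuery": "true",               # asks Solr to include the explain trees
        "rows": "10",
        "wt": "json",
    }
    with urllib.request.urlopen(base + "?" + urllib.parse.urlencode(params)) as resp:
        data = json.load(resp)
    # data["debug"]["explain"] maps each document id to a tree like those below.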
  1. Hu, X.; Lee, J.H.; Bainbridge, D.; Choi, K.; Organisciak, P.; Downie, J.S.: The MIREX grand challenge : a framework of holistic user-experience evaluation in music information retrieval (2017) 0.02
    0.015921833 = product of:
      0.07164825 = sum of:
        0.016802425 = weight(_text_:of in 3321) [ClassicSimilarity], result of:
          0.016802425 = score(doc=3321,freq=14.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2742677 = fieldWeight in 3321, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3321)
        0.054845825 = weight(_text_:systems in 3321) [ClassicSimilarity], result of:
          0.054845825 = score(doc=3321,freq=10.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.45554203 = fieldWeight in 3321, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=3321)
      0.22222222 = coord(2/9)
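    Each breakdown in this listing is Lucene ClassicSimilarity (TF-IDF): for every matching term, score = queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(tf) * idf * fieldNorm; the per-term scores are summed and then scaled by coord(matching clauses / total clauses). A minimal sketch that recomputes this entry's 0.015921833 from nothing but the printed values:

      import math

      def term_score(freq, idf, query_norm, field_norm):
          # queryWeight = idf * queryNorm; fieldWeight = sqrt(tf) * idf * fieldNorm
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      QUERY_NORM = 0.03917671
      s_of = term_score(14.0, 1.5637573, QUERY_NORM, 0.046875)       # ~0.0168024
      s_systems = term_score(10.0, 3.0731742, QUERY_NORM, 0.046875)  # ~0.0548458
      print((s_of + s_systems) * (2 / 9))  # ~0.0159218: the coord(2/9) total above

    The same routine, fed each entry's printed freq, idf, and fieldNorm, reproduces the totals for results 2 through 5; result 6 differs only in its coord factor (see the note after its breakdown).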
    
    Abstract
    Music Information Retrieval (MIR) evaluation has traditionally focused on system-centered approaches where components of MIR systems are evaluated against predefined data sets and golden answers (i.e., ground truth). There are two major limitations of such system-centered evaluation approaches: (a) the evaluation focuses on subtasks in music information retrieval, but not on entire systems, and (b) users and their interactions with MIR systems are largely excluded. This article describes the first implementation of a holistic user-experience evaluation in MIR, the MIREX Grand Challenge, where complete MIR systems are evaluated, with user experience being the single overarching goal. It is the first time that complete MIR systems have been evaluated with end users in a realistic scenario. We present the design of the evaluation task, the evaluation criteria and a novel evaluation interface, and the data-collection platform. This is followed by an analysis of the results, reflection on the experience and lessons learned, and plans for future directions.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.1, S.97-112
  2. Hu, X.; Choi, K.; Downie, J.S.: A framework for evaluating multimodal music mood classification (2017) 0.01
    0.012897649 = product of:
      0.05803942 = sum of:
        0.015556021 = weight(_text_:of in 3354) [ClassicSimilarity], result of:
          0.015556021 = score(doc=3354,freq=12.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.25392252 = fieldWeight in 3354, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3354)
        0.042483397 = weight(_text_:systems in 3354) [ClassicSimilarity], result of:
          0.042483397 = score(doc=3354,freq=6.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.35286134 = fieldWeight in 3354, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=3354)
      0.22222222 = coord(2/9)
    
    Abstract
    This research proposes a framework for music mood classification that uses multiple and complementary information sources, namely, music audio, lyric text, and social tags associated with music pieces. This article presents the framework and a thorough evaluation of each of its components. Experimental results on a large data set of 18 mood categories show that combining lyrics and audio significantly outperformed systems using audio-only features. Automatic feature selection techniques were further shown to reduce the feature space. In addition, the examination of learning curves shows that hybrid systems using lyrics and audio needed fewer training samples and shorter audio clips to achieve the same or better classification accuracies than systems using lyrics or audio alone. Finally, performance comparisons reveal the relative importance of audio and lyric features across mood categories.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.2, S.273-285
  3. Downie, J.S.: The MusiFind Music Information Retrieval project, phase III : evaluation of indexing options (1995) 0.01
    0.0115656955 = product of:
      0.05204563 = sum of:
        0.023429861 = weight(_text_:of in 2557) [ClassicSimilarity], result of:
          0.023429861 = score(doc=2557,freq=20.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.38244802 = fieldWeight in 2557, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2557)
        0.028615767 = weight(_text_:systems in 2557) [ClassicSimilarity], result of:
          0.028615767 = score(doc=2557,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.23767869 = fieldWeight in 2557, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2557)
      0.22222222 = coord(2/9)
    
    Abstract
    Continues the ongoing development of the MusiFind Music Information Retrieval Project. Musical indexing options were examined through the use of term distributions and retrieval experiments. The most important factor determining retrieval precision is the classification scheme chosen to create the melodic segment indexes. Both the number of elements within a classification scheme and the definitions of those elements were found to be influential. Eliminates several indexing options from future consideration and suggests new research questions.
    Imprint
    Alberta : University of Alberta, School of Library and Information Studies
    Source
    Connectedness: information, systems, people, organizations. Proceedings of CAIS/ACSI 95, the 23rd Annual Conference of the Canadian Association for Information Science. Ed. by Hope A. Olson and Denis B. Ward
  4. Downie, J.S.: Music information retrieval (2002) 0.01
    0.0094423 = product of:
      0.04249035 = sum of:
        0.017962547 = weight(_text_:of in 4287) [ClassicSimilarity], result of:
          0.017962547 = score(doc=4287,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 4287, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4287)
        0.0245278 = weight(_text_:systems in 4287) [ClassicSimilarity], result of:
          0.0245278 = score(doc=4287,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.2037246 = fieldWeight in 4287, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.046875 = fieldNorm(doc=4287)
      0.22222222 = coord(2/9)
    
    Abstract
    Imagine a world where you walk up to a computer and sing the song fragment that has been plaguing you since breakfast. The computer accepts your off-key singing, corrects your request, and promptly suggests to you that "Camptown Races" is the cause of your irritation. You confirm the computer's suggestion by listening to one of the many MP3 files it has found. Satisfied, you kindly decline the offer to retrieve all extant versions of the song, including a recently released Italian rap rendition and an orchestral score featuring a bagpipe duet. Does such a system exist today? No. Will it in the future? Yes. Will such a system be easy to produce? Most decidedly not. Myriad difficulties remain to be overcome before the creation, deployment, and evaluation of robust, large-scale, and content-based Music Information Retrieval (MIR) systems become reality. The dizzyingly complex interaction of music's pitch, temporal, harmonic, timbral, editorial, textual, and bibliographic "facets," for example, demonstrates just one of MIR's perplexing problems. The choice of music representation (whether symbol-based, audio-based, or both) further compounds matters, as each choice determines bandwidth, computation, storage, retrieval, and interface requirements and capabilities.
    Source
    Annual review of information science and technology. 37(2003), S.295-342
  5. Downie, J.S.: A sample of music information retrieval approaches (2004) 0.01
    0.008145894 = product of:
      0.03665652 = sum of:
        0.020304654 = weight(_text_:of in 3056) [ClassicSimilarity], result of:
          0.020304654 = score(doc=3056,freq=46.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.33143494 = fieldWeight in 3056, product of:
              6.78233 = tf(freq=46.0), with freq of:
                46.0 = termFreq=46.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=3056)
        0.016351866 = weight(_text_:systems in 3056) [ClassicSimilarity], result of:
          0.016351866 = score(doc=3056,freq=2.0), product of:
            0.12039685 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03917671 = queryNorm
            0.1358164 = fieldWeight in 3056, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.03125 = fieldNorm(doc=3056)
      0.22222222 = coord(2/9)
    
    Abstract
    In this Perspectives edition of the Journal of the American Society for Information Science and Technology, we present articles specifically written as introductory overviews of ten important music information retrieval (MIR) research and development projects. As an MIR researcher myself, I am continuously awestruck by the multinational and multidisciplinary nature of MIR research. In this brief overview edition alone, we have authors from Australia, Canada, France, Great Britain, Italy, New Zealand, Poland, United States, Spain, and Taiwan. The disciplines represented both in this issue and in the broader MIR community include: general, music, and digital librarianship; computer science; audio engineering; signal processing; traditional information retrieval; musicology, music theory, and music psychology; the recording and music distribution business; human-computer interaction; and intellectual property law. Uniting this seemingly disparate aggregation is the common goal of providing the kind of robust access to the world's vast store of music, in all its varied forms (i.e., audio, symbolic, and metadata), that we currently provide for textual materials. Yes, some MIR research teams do have visions of establishing the musical equivalent of Google.com. The enormous magnitude of the potential user base is too enticing not to be seduced by the monetary rewards associated with the successful launching of a Web-based MIR service. Beyond dreams of unlimited wealth, however, I believe most MIR research teams are motivated by two primary factors: (1) a basic love of music and (2) an overwhelming intellectual need to overcome the myriad difficulties posed by the inherent complexity of music. About the former factor, I have little to add but to say most, if not all, of the MIR researchers I have had the good fortune to meet have had long histories of amateur and professional music performance and/or scholarship. To the latter factor I devote the following paragraph. Music information is a multifaceted amalgam that includes pitch, temporal (i.e., rhythm), harmonic, textual (i.e., lyrics, etc.), timbral (e.g., orchestration), editorial, and metadata elements. Music information is also extremely plastic. That is, any given work can have its specific pitches altered, its rhythms modified, its harmonies reset, its orchestration changed, its performances reinterpreted, and its performers arbitrarily chosen; yet, somehow, it remains the "same" piece of music as the "original." Because music information queries are founded in the same materials as music information, their creation and interpretation are also extremely plastic. Within this extraordinarily fluid environment, notions of "similarity" become particularly problematic. The problems associated with similarity in turn lead to difficulties in creating meaningful and useful outputs from MIR systems in response to the wide variety of potential music queries that will be submitted to them. Thus, the grand intellectual challenge facing past, present, and future MIR research is the acquisition of a fundamental understanding of music information itself.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.12, S.1033-1036
  6. Organisciak, P.; Schmidt, B.M.; Downie, J.S.: Giving shape to large digital libraries through exploratory data analysis (2022) 0.00
    0.0019958385 = product of:
      0.017962547 = sum of:
        0.017962547 = weight(_text_:of in 473) [ClassicSimilarity], result of:
          0.017962547 = score(doc=473,freq=16.0), product of:
            0.061262865 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03917671 = queryNorm
            0.2932045 = fieldWeight in 473, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=473)
      0.11111111 = coord(1/9)
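    Only one of the nine query clauses ("of") matched this document, so the sum is scaled by coord(1/9) rather than the coord(2/9) seen in the entries above. A short self-contained check against the printed values:

      import math

      # fieldWeight = sqrt(tf=16) * idf * fieldNorm; then * queryWeight * coord(1/9)
      score = (math.sqrt(16.0) * 1.5637573 * 0.046875) * 0.061262865 * (1 / 9)
      print(score)  # ~0.0019958, matching the printed 0.0019958385 to float precision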
    
    Abstract
    The emergence of large multi-institutional digital libraries has opened the door to aggregate-level examinations of the published word. Such large-scale analysis offers a new way to pursue traditional problems in the humanities and social sciences, using digital methods to ask routine questions of large corpora. However, inquiry into multiple centuries of books is constrained by the burdens of scale, where statistical inference is technically complex and limited by hurdles to access and flexibility. This work examines the role that exploratory data analysis and visualization tools may play in understanding large bibliographic datasets. We present one such tool, HathiTrust+Bookworm, which allows multifaceted exploration of the multimillion-work HathiTrust Digital Library, and center it in the broader space of scholarly tools for exploratory data analysis.
    Source
    Journal of the Association for Information Science and Technology. 73(2022) no.2, S.317-332