• Media type: E-Article
  • Title: Timbre representation for automatic classification of musical instruments
  • Contributor: Kostek, Bozena
  • Published: Acoustical Society of America (ASA), 2006
  • Published in: The Journal of the Acoustical Society of America, 120 (2006), 5_Supplement, pages 3276-3277
  • Language: English
  • DOI: 10.1121/1.4777250
  • ISSN: 0001-4966; 1520-8524
  • Keywords: Acoustics and Ultrasonics ; Arts and Humanities (miscellaneous)
  • Description: Human communication includes the capability of recognition. This is particularly true of auditory communication. Music information retrieval (MIR) turns out to be particularly challenging, since many of its problems remain unsolved. Topics within the scope of MIR include automatic classification of musical instruments/phrases/styles, music representation and indexing, estimating musical similarity using both perceptual and musicological criteria, recognizing music using audio and/or semantic description, language modeling for music, auditory scene analysis, and others. Many features of music content description are based on perceptual phenomena and cognition. However, it can easily be observed that most of the low-level descriptors used, for example, in musical instrument classification are more data- than human-oriented. This is because the idea behind these features is to have data defined and linked in such a way that it can be used for more effective automatic discovery, integration, and reuse in various applications. The ambitious task, however, is to assign seamless meaning to low- and high-level descriptors, such as timbre descriptors, and to link them together. In this way, data can be processed and shared by both systems and people. This paper presents a study related to the timbre representation of musical instrument sounds.
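  • Note: As an illustration of the kind of low-level, data-oriented descriptor the abstract refers to (this example is not taken from the paper itself), a widely used timbre feature is the spectral centroid: the amplitude-weighted mean frequency of a sound's magnitude spectrum, often correlated with perceived "brightness". A minimal sketch in Python with NumPy:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Spectral centroid: amplitude-weighted mean frequency
    of the magnitude spectrum of `signal`, in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))          # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    if spectrum.sum() == 0:                         # silence: centroid undefined, return 0
        return 0.0
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# Sanity check: a pure 440 Hz tone should have a centroid near 440 Hz,
# since all spectral energy sits in that single frequency bin.
sr = 16000
t = np.arange(sr) / sr                              # one second of samples
tone = np.sin(2 * np.pi * 440.0 * t)
print(spectral_centroid(tone, sr))
```

Real MIR systems compute many such descriptors (spectral rolloff, flux, MFCCs, etc.) frame by frame and feed them to a classifier; the function name and framing here are illustrative only.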