Changes between Version 1 and Version 2 of Publications


Timestamp: Apr 11, 2008, 6:05:14 PM
Author: Paul Brossier
Comment: added Amaury and You papers

P. Brossier, J. P. Bello and M. D. Plumbley. [http://aubio.org/articles/brossier04fastnotes.pdf Fast labelling of notes in music signals], in ''Proceedings of the 5th International Conference on Music Information Retrieval'' (ISMIR 2004), Barcelona, Spain, October 10-14, 2004.
    ''Abstract'': We present a new system for the estimation of note attributes from a live monophonic music source, within a short time delay and without any previous knowledge of the signal. The labelling is based on the temporal segmentation and the successive estimation of the fundamental frequency of the current note object. The setup, implemented around a small C library, is directed at the robust note segmentation of a variety of audio signals. A system for evaluation of performances is also presented. The further extension to polyphonic signals is considered, as well as design concerns such as portability and integration in other software environments.
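The paper describes a small C library (aubio) driving the segmentation and pitch estimation. A minimal sketch of the same idea using aubio's Python bindings is shown below; the input file name is a placeholder, and averaging the per-hop pitch over a segment is a crude stand-in for the paper's note-object estimation, not the system itself.
{{{#!python
# Segment a monophonic signal with an onset detector, then label each
# note with a pitch estimated over the current segment. Illustrative
# sketch only: "voice.wav" is a placeholder input file.
import aubio

hop = 512
src = aubio.source("voice.wav", hop_size=hop)
onset = aubio.onset("default", 1024, hop, src.samplerate)
pitch = aubio.pitch("default", 2048, hop, src.samplerate)
pitch.set_unit("midi")

segment = []  # per-hop pitch estimates since the last onset
while True:
    samples, read = src()
    if onset(samples)[0]:  # temporal segmentation: a new note starts
        if segment:
            note = sum(segment) / len(segment)  # crude per-note estimate
            print("note ended at %.3fs: MIDI %.1f" % (onset.get_last_s(), note))
        segment = []
    segment.append(pitch(samples)[0])
    if read < hop:
        break
}}}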

== Co-authored papers ==

A. Hazan, P. Brossier, P. Holonowicz, P. Herrera, and H. Purwins. [http://mtg.upf.edu/publications/100be8-ICMC07-Hazan.pdf Expectation Along The Beat: A Use Case For Music Expectation Models], in ''Proceedings of International Computer Music Conference 2007'', Copenhagen, Denmark, pp. 228-236, 2007.
    ''Abstract'': We present a system to produce expectations based on the observation of a rhythmic music signal at a constant tempo. The algorithms we use are causal, in order to fit closer to cognitive constraints and allow a future real-time implementation. In a first step, an acoustic front-end based on the aubio library extracts onsets and beats from the incoming signal. The extracted onsets are then encoded in a symbolic way using an unsupervised scheme: each hit is assigned a timbre cluster based on its timbre features, while its inter-onset interval with respect to the previous hit is computed as a proportion of the extracted tempo period and assigned an inter-onset interval cluster. In a later step, the representation of each hit is sent to an expectation module, which learns the statistics of the symbolic sequence. Hence, at each musical hit, the system produces both what and when expectations regarding the next musical hit. For evaluating our system, we consider a weighted average F-measure, which takes into account the uncertainty associated with the unsupervised encoding of the musical sequence. We then present a preliminary experiment involving generated musical material and propose a roadmap in the context of this novel application field.
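The inter-onset-interval encoding described above expresses each hit's IOI as a proportion of the extracted tempo period and assigns it to a cluster. A toy sketch follows; the fixed cluster grid here is purely illustrative (the paper learns its clusters unsupervised).
{{{#!python
# Encode inter-onset intervals as proportions of the tempo period,
# snapped to the nearest of a few subdivision clusters. The GRID
# values are hypothetical; the paper derives clusters unsupervised.
GRID = [0.25, 0.5, 1.0, 2.0]  # candidate IOI clusters, in beats

def encode_iois(onset_times, beat_period):
    symbols = []
    for prev, cur in zip(onset_times, onset_times[1:]):
        ratio = (cur - prev) / beat_period
        symbols.append(min(GRID, key=lambda g: abs(g - ratio)))
    return symbols

print(encode_iois([0.0, 0.5, 1.0, 1.25, 1.5], beat_period=0.5))
# -> [1.0, 1.0, 0.5, 0.5]
}}}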

A. Hazan, P. Brossier, R. Marxer, and H. Purwins. [http://mtg.upf.edu/publications/79ee88-NIPS-2007-ahazan.pdf What/when causal expectation modelling in monophonic pitched and percussive audio], in ''Music, Brain and Cognition. Part 2: Models of Sound and Cognition, part of the Neural Information Processing Conference (NIPS)'', Vancouver, Canada, 2007.
    ''Abstract'': A causal system for representing a musical stream and generating further expected events is presented. Starting from an auditory front-end which extracts low-level (e.g. spectral shape, MFCC, pitch) and mid-level features such as onsets and beats, an unsupervised clustering process builds and maintains a set of symbols aimed at representing musical stream events using both timbre and time descriptions. The time events are represented using inter-onset intervals relative to the beats. These symbols are then processed by an expectation module based on Predictive Partial Match, a multiscale technique based on N-grams. To characterise the system's capacity to generate an expectation that matches its transcription, we use a weighted average F-measure, which takes into account the uncertainty associated with the unsupervised encoding of the musical sequence. The potential of the system is demonstrated in the case of processing audio streams which contain drum loops or monophonic singing voice. In preliminary experiments, we show that the induced representation is useful for generating expectation patterns in a causal way. During exposure, we observe a globally decreasing prediction entropy combined with structure-specific variations.
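A much-simplified stand-in for the expectation module is sketched below. The paper uses a multiscale N-gram technique; this sketch keeps a single order (bigram counts over (timbre cluster, IOI cluster) symbols) only to show the combined what/when prediction step.
{{{#!python
# Single-order (bigram) expectation over (timbre, IOI) symbols. A toy
# stand-in for the multiscale N-gram model named in the abstract.
from collections import Counter, defaultdict

class BigramExpectation:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, prev, symbol):
        self.counts[prev][symbol] += 1

    def expect(self, prev):
        # most likely next (timbre cluster, IOI cluster) pair
        nxt = self.counts.get(prev)
        return nxt.most_common(1)[0][0] if nxt else None

model = BigramExpectation()
hits = [("kick", 1.0), ("snare", 1.0), ("kick", 0.5), ("snare", 1.0)]
for a, b in zip(hits, hits[1:]):
    model.observe(a, b)
print(model.expect(("kick", 1.0)))  # -> ('snare', 1.0)
}}}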

== Other Contributions ==

W. You and R. B. Dannenberg. [http://ismir2007.ismir.net/proceedings/ISMIR2007_p279_you.pdf Polyphonic Music Note Onset Detection Using Semi-Supervised Learning], in ''Proceedings of the 8th International Conference on Music Information Retrieval'' (ISMIR 2007), Vienna, Austria, September 23-27, 2007.
    ''Abstract'': Automatic note onset detection is particularly difficult in orchestral music (and polyphonic music in general). Machine learning offers one promising approach, but it is limited by the availability of labeled training data. Score-to-audio alignment, however, offers an economical way to locate onsets in recorded audio, and score data is freely available for many orchestral works in the form of standard MIDI files. Thus, large amounts of training data can be generated quickly, but it is limited by the accuracy of the alignment, which in turn is ultimately related to the problem of onset detection. Semi-supervised or bootstrapping techniques can be used to iteratively refine both onset detection functions and the data used to train the functions. We show that this approach can be used to improve and adapt a general purpose onset detection algorithm for use with orchestral music.
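A schematic of the bootstrapping loop the abstract describes is sketched below. Every callable here is a hypothetical placeholder; the paper's alignment procedure, features, and classifier are not reproduced.
{{{#!python
# Iteratively refine an onset detector and its training labels. All
# callables are hypothetical placeholders standing in for the paper's
# alignment-derived labelling, detector training, and label refinement.
def bootstrap(audio, rough_labels, train, detect, refine, rounds=3):
    labels = rough_labels                 # from score-to-audio alignment
    model = None
    for _ in range(rounds):
        model = train(audio, labels)      # fit the onset detector
        onsets = detect(model, audio)     # apply it to the recording
        labels = refine(labels, onsets)   # snap labels to detections
    return model, labels

# trivial stand-ins so the loop runs end-to-end:
model, labels = bootstrap(
    audio=[0.1, 0.9, 0.2],
    rough_labels=[1.00, 2.00],
    train=lambda a, l: ("model", len(l)),
    detect=lambda m, a: [1.02, 1.98],
    refine=lambda l, o: o,
)
print(labels)  # -> [1.02, 1.98]
}}}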