Profiling Audio Effects with Deep Neural Networks
Date: 24th July 2019
Time: 18:30 – 20:30
Venue: Queen Mary University of London, Mile End Road, London, E1 4NS
Speaker: Dr Scott Hawley
One traditional method of modeling audio effects, amplifiers, analog gear, microphones, and so on is to build a detailed model that emulates the physical processes involved. An alternative is a ‘data-driven’, ‘model-agnostic’ approach, in which a large collection of audio inputs and outputs is gathered and a machine learning system such as a neural network is trained to approximate the same mapping of inputs to outputs in a “black box” manner. Such methods have been applied to reverberation, tube amplifiers, source separation, and a host of other applications. I’ll present some recent success in ‘profiling’ dynamic range compressors, whose combination of nonlinearity and time-dependence has made them difficult to capture. The result is still a bit noisy and slow, so we won’t be replacing traditional plugins just yet, but this method also allows easy creation of ‘inverse’ effects such as de-noising and de-compression, letting the engineer build new effects whenever sufficient input-output recordings are available. For a web demo and a link to the recent paper on this, see http://www.signaltrain.ml
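To give a flavour of the black-box idea described above, here is a minimal sketch (not the speaker's actual method, which uses deep networks on time-dependent audio): a hypothetical static compressor curve generates input-output pairs, and a tiny one-hidden-layer network is trained with plain gradient descent to approximate that mapping from data alone. Real compressors also have attack/release time-dependence, which this toy version deliberately ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "effect" to profile: a simplified static compressor curve.
# (Real compressors are time-dependent; this sketch ignores attack/release.)
def compressor(x, threshold=0.3, ratio=4.0):
    y = x.copy()
    over = np.abs(x) > threshold
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

# "Recordings": random input samples and the effect's output for each.
x = rng.uniform(-1.0, 1.0, size=(4096, 1))
y = compressor(x)

# Tiny one-hidden-layer network, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for step in range(2000):
    h = np.tanh(x @ W1 + b1)        # hidden-layer activations
    pred = h @ W2 + b2              # network's guess at the effect output
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagate through the two layers.
    d_pred = 2.0 * err / len(x)
    dW2 = h.T @ d_pred;            db2 = d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1.0 - h ** 2)
    dW1 = x.T @ d_h;               db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training MSE: {loss:.5f}")
```

Note that swapping `x` and `y` in the training loop would train the network on the reverse mapping, which is the sense in which this approach yields ‘inverse’ effects like de-compression essentially for free, given enough input-output recordings.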