In games, films and virtual reality, we recreate the sounds around us, or create unreal sounds to evoke emotions and capture the imagination. But that creation typically relies on pre-recorded samples. This can lead to repetition, and adaptation is limited to post-processing of those samples. The lack of plausible, generative sound often limits the richness and realism of the auditory scene.
With procedural audio approaches, by contrast, sound is synthesised in real time and adapts to changing controls. The question arises: is procedural audio a niche approach, applicable only to specialist uses and select sounds, or can all the sounds in a large production be procedurally generated? Nemisindo is a start-up built around this vision. They aim to create almost any sound effect from scratch, in real time, based on intuitive controls guided by the user. This talk will give an overview of the field, explanations and demonstrations of the technology, and a discussion of the challenges and directions currently being explored.
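To make the idea concrete, here is a minimal sketch of procedural audio (a toy illustration, not Nemisindo's technology): a wind-like sound synthesised by shaping white noise with a low-pass filter whose cutoff and gain follow a user-driven intensity control, so the output varies continuously rather than replaying a fixed sample. The names `wind` and `one_pole_lowpass` are illustrative, assuming only NumPy.

```python
# Toy procedural "wind": white noise shaped by a one-pole low-pass filter
# whose cutoff and gain track a control signal, so the sound adapts in
# real time instead of looping a recorded sample.
import numpy as np

SAMPLE_RATE = 44100

def one_pole_lowpass(x, cutoff_hz, sample_rate=SAMPLE_RATE):
    """Filter x with a one-pole low-pass at a (possibly time-varying) cutoff."""
    cutoff = np.broadcast_to(np.asarray(cutoff_hz, dtype=float), x.shape)
    # Per-sample smoothing coefficient derived from the cutoff frequency.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sample_rate)
    y = np.empty_like(x)
    state = 0.0
    for i in range(len(x)):
        state += alpha[i] * (x[i] - state)
        y[i] = state
    return y

def wind(intensity):
    """Synthesise wind-like audio from a per-sample intensity envelope in [0, 1]."""
    intensity = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    noise = np.random.uniform(-1.0, 1.0, size=intensity.shape)
    # Higher intensity opens the filter (brighter timbre) and raises the level.
    cutoff = 200.0 + 1800.0 * intensity
    return one_pole_lowpass(noise, cutoff) * (0.2 + 0.8 * intensity)

if __name__ == "__main__":
    # A four-second gust: intensity rises and falls, and the sound follows.
    t = np.linspace(0.0, 1.0, 4 * SAMPLE_RATE)
    gust = wind(np.sin(np.pi * t) ** 2)
    print(f"rendered {len(gust)} samples, peak {np.abs(gust).max():.2f}")
```

Because the intensity envelope can be driven at runtime, for instance by a game engine, the same generator produces endless non-repeating variations of the sound.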
Josh Reiss is a Professor with the Centre for Digital Music at Queen Mary University of London. He has published more than 200 scientific papers and co-authored three books in the audio field. He is the President-Elect and a Fellow of the Audio Engineering Society (AES). He co-founded the highly successful spin-out company LandR, and recently formed the start-ups Tonz and Nemisindo. His primary research focus is the use of state-of-the-art signal processing techniques for sound design and audio production. He maintains a popular blog, YouTube channel and Twitter feed for scientific education and dissemination of research activities.