SPECIAL REPORT: We go under the hood with the Wwise creator's new dynamic synthesis technology
Audiokinetic has announced a new product, SoundSeed, at the London stop of its Wwise tour.
SoundSeed is a family of interactive sound generators that enables audio designers to use a single 'footprint' sample to generate unlimited variations using DSP technology.
"One of the challenges that still remains [for audio designers] is memory limitations," said Jacques Deveau, audio program manager for Audiokinetic. "There's still a ridiculously low amount of memory available for audio content. So, we thought that for the first iteration it would be good to create a plugin to help overcome those limitations by introducing a lot more variation with the SoundSeed technology.
"Secondly, on the creative side, it also allows you to exceed other limitations - if you're recording source sounds, you're obviouisly limited to physical sounds that you can capture, but once you capture those into SourceSeed you can modify the properties and really push the limits creatively."
The technology currently functions as a plug-in to Audiokinetic's Wwise sound engine, although the company does foresee other possibilities for the tech. "There's basically two parts - the Modeler tool is an external tool that generates content, and currently that's exclusive to Wwise, and then there's the runtime. But we also foresee other applications, other uses, beyond Wwise - but right now for the initial release it's an audio effect in Wwise," he said.
The first product to be launched under the SoundSeed banner is SoundSeed Impact, which is specifically geared towards impact sounds such as sword clashes and footsteps. Further specifically tailored modules are definitely on the agenda for Audiokinetic. "We foresee lots of different types of modules down the road as well," Deveau continued. "It's technology that we're developing, and the advantage we have is that we can research and develop technology that's specific for the type of synthesis that we want to do.
"We have the flexibility to pick the technology to meet the requirements. I'm not exactly sure about the frequency, but we definitely want a couple of modules available, and it'l be relatively quick - it's not going to be years between releases."
Explaining how the technology works, Deveau talked about how the Modeler analyses source sound files for 'resonant modes' - imagine the pure tone from hitting a wine glass with your finger - and extracts them from the file. "It then creates a model which provides you with the ability to recreate that. So the output of the Modeler is what we call a residual sound - all the sound with the resonant content removed, essentially just noise - and a data model.
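The analysis step Deveau describes can be approximated with a simple spectral peak pick: find the strongest resonances, record their frequency and magnitude, and zero them out to leave a noise-like residual. The sketch below is a minimal illustration of that idea in Python with NumPy; it is not Audiokinetic's actual analysis algorithm, and the function name and parameters are hypothetical.

```python
import numpy as np

def extract_modes(signal, sample_rate, num_modes=3):
    """Illustrative approximation of modal analysis (not SoundSeed's method):
    pick the strongest spectral peaks as 'resonant modes' and return them
    as (frequency, magnitude) pairs plus a residual with those bins removed."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    magnitudes = np.abs(spectrum)

    # Take the num_modes largest spectral bins as the resonant modes.
    peak_bins = np.argsort(magnitudes)[-num_modes:]
    modes = [(freqs[b], magnitudes[b]) for b in sorted(peak_bins)]

    # Zero the modal bins: what remains is the noise-like residual.
    residual_spectrum = spectrum.copy()
    residual_spectrum[peak_bins] = 0.0
    residual = np.fft.irfft(residual_spectrum, n=len(signal))
    return modes, residual
```

A real analyser would track time-varying modes and decay rates rather than a single global FFT, but the split into "modal data" plus "residual noise" is the same.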
"That's what you load into Wwise at runtime: you load the residual file, apply the sound sheet, load the model, and then you have the ability to transform those modes in frequency and magnitude. That's what you're doing in runtime to create the variations, you're modifying that modal information," he added.
And while the tech was only revealed to the public today, one UK studio has been testing it out already, with great results. "Realtime Worlds' original feedback was great - they said it delivered on the promise of reducing the memory footprint. They had a lot of variations that were taking up a lot of memory, so they were really happy to use it. They were really excited to get on board early and try it out, and they've been really happy with it so far."