Build Audio DSP plug-ins for beat matching (time scaling)
Hi Everyone,
I'm an audio programming student currently working on a DJ game in Unity. I used CsoundUnity for the beat matching part, but I ran into tons of problems with Csound. Here's the thing: if you're also an audio programmer, you know that beat matching is all about time scaling (via either a phase vocoder or a granular process), so the processing has to happen in the middle, after the AudioClip of the AudioSource starts feeding buffers and before the AudioSource component outputs. That means building a Unity native audio plug-in in C++ for this kind of processing won't work, because native plug-ins run even after the audio mixer's attenuation module!
So is it possible to get access to the AudioClip of an AudioSource component and manipulate the audio buffers/signal from it, inserting our external C++ DSP code before it is handed back to the output stage of the AudioSource? Do we have to, or even can we, get access to the original C++ Unity audio engine code to achieve this?
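To make it concrete, this is roughly what I'm imagining on the C# side: pull the raw samples out of the clip and hand them to our own native DSP. This is just a sketch, assuming the clip is loaded as DecompressOnLoad (so GetData works) and assuming a hypothetical native library called MyTimeScaleDsp with a ProcessBlock entry point (those names are mine, not Unity APIs):

using System.Runtime.InteropServices;
using UnityEngine;

public class ClipDspBridge : MonoBehaviour
{
    // Hypothetical entry point in our own C++ time-scaling plug-in (not a Unity API).
    [DllImport("MyTimeScaleDsp")]
    private static extern void ProcessBlock(float[] buffer, int length, int channels, float stretchRatio);

    public AudioClip clip;
    [Range(0.5f, 2.0f)] public float stretchRatio = 1.0f;

    void Start()
    {
        // Pull the raw interleaved samples out of the clip on the managed side.
        // This only works if the clip's load type is DecompressOnLoad.
        float[] samples = new float[clip.samples * clip.channels];
        clip.GetData(samples, 0);

        // Hand the block to the external C++ DSP (phase vocoder / granular) for time scaling.
        ProcessBlock(samples, samples.Length, clip.channels, stretchRatio);

        // ...but at this point the processed buffer still has to get back into
        // the AudioSource somehow, which is exactly the part I'm unsure about.
    }
}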
To make the question clear, here is a chart of the signal flow:
Native Audio Plug-in: AudioClip -> AudioSourceOutput -> AudioMixerAttenuationFader -> OurCustomNativePlugIn -> Master
What I might have to do: AudioClip -> DSP process happens here -> AudioSourceOutput -> AudioMixerAttenuationFader -> Master
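The closest existing hook I know of is OnAudioFilterRead, but as far as I understand it sits after the AudioSourceOutput stage (i.e. the first chart, not the second). A rough sketch of that, again with a hypothetical ProcessInPlace entry point in our own C++ library:

using System.Runtime.InteropServices;
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class FilterHook : MonoBehaviour
{
    // Hypothetical C++ entry point; the name and signature are placeholders.
    [DllImport("MyTimeScaleDsp")]
    private static extern void ProcessInPlace(float[] data, int length, int channels);

    // Called on the audio thread with the AudioSource's output for each block,
    // i.e. the signal is already past the AudioClip -> AudioSourceOutput stage.
    void OnAudioFilterRead(float[] data, int channels)
    {
        ProcessInPlace(data, data.Length, channels);
    }
}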
But feel free to correct me since I'm a newbie; it would be awesome if someone on the Unity audio team who built the engine could answer me.
Thanks!