How to play an AudioClip using OnAudioFilterRead()?
I want to build a reverb filter, using OnAudioFilterRead().
Here (Google Drive) is a link to a project file with a scene that uses my current code.
In my code I have a method that looks at the environment and creates a list of delay values based on the positions of objects relative to the player. However, my current structure produces audio glitches, which I think are caused by the methods Reflections() and OnAudioFilterRead() not being called in sync.
I am currently using InvokeRepeating() to call Reflections(), but I'd much rather call it at a specific moment relative to OnAudioFilterRead(). However, because OnAudioFilterRead() apparently runs on a separate thread, I can't call Reflections() from within OnAudioFilterRead().
If you know how to resolve my audio glitches, and perhaps can tell me something about their origin, I'd be very grateful! :]
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class myReverb : MonoBehaviour
{
    float[] myBuffer;
    Collider[] myObjects;
    List<int> myReflections = new List<int>();
    Vector3 myPosition;
    float myDistance;
    int myDelay;
    public float myRange;
    public float myVolume;

    void Start()
    {
        myBuffer = new float[40960];
        for (int h = 0; h < 40960; h++)
        {
            myBuffer[h] = 0f;
        }
        //InvokeRepeating results in audio glitches, because it will not run synchronously with OnAudioFilterRead()...
        InvokeRepeating("Reflections", 0f, 0.01f);
    }

    public void Reflections()
    {
        //Clear list...
        myReflections.Clear();
        //Map the environment based on the current location...
        myPosition = transform.position;
        myObjects = Physics.OverlapSphere(myPosition, myRange);
        foreach (Collider _Object in myObjects)
        {
            if (_Object.gameObject.GetComponent<myReverbManipulator>())
            {
                //Convert distance to time, expressed as a number of samples (speed of sound = 340.29 m/s)...
                myReflections.Add((int)((Vector3.Distance(myPosition, _Object.gameObject.transform.position) / 340.29f) * 44100));
            }
        }
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        //Buffer actions: shift the buffer and append the newest block...
        for (int a = 0; a < 38912; a++)
        {
            myBuffer[a] = myBuffer[a + 2048];
        }
        for (int b = 0; b < 2048; b++)
        {
            myBuffer[38912 + b] = data[b];
        }
        //I'm not allowed to call a method from within OnAudioFilterRead()... :(
        //Reflections();
        //Insert reflections into the audio...
        for (int c = 0; c < myReflections.Count; c++)
        {
            myDelay = myReflections[c];
            for (int d = 0; d < 2048; d++)
            {
                data[d] += myBuffer[38912 + d - myDelay] * myVolume;
            }
        }
    }
}
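One common way around the threading restriction is to compute the reflection list on the main thread and hand it over behind a lock, so the audio thread never sees a half-updated list. Below is a minimal sketch of that pattern (my own addition, not part of the question's code; ComputeReflections() is a hypothetical helper doing what Reflections() does, and the fields would live inside the same MonoBehaviour):

```csharp
// Sketch: build the reflections on the main thread, swap the reference
// in under a lock so the audio thread only ever reads a complete list.
List<int> pendingReflections = new List<int>();
readonly object reflectionLock = new object();

void Update()
{
    // Physics calls must stay on the main thread anyway...
    List<int> fresh = ComputeReflections(); // hypothetical helper, same job as Reflections()

    lock (reflectionLock)
    {
        pendingReflections = fresh; // reference swap under the lock
    }
}

void OnAudioFilterRead(float[] data, int channels)
{
    List<int> current;
    lock (reflectionLock)
    {
        current = pendingReflections; // grab the latest complete list
    }
    // ...use 'current' instead of myReflections in the processing loop...
}
```

Keeping the lock-held section to a bare reference swap keeps the time the audio thread can be blocked very short.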
You should do all your operations in Update() and only do

    for (int d = 0; d < 2048; d++)
    {
        data[d] += myBuffer[38912 + d - myDelay] * myVolume;
    }

in OnAudioFilterRead().
As for the glitches, the problem may come from:
1) A synchronisation problem between InvokeRepeating() and OnAudioFilterRead(): you can add variables to make sure you only read when your buffer is ready (you can use two booleans for that).
2) A continuity problem: add an envelope to the sound you want to delay, to make sure it starts and ends at 0.
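On point 2, a minimal sketch of such an envelope (my own illustration; the ramp length is an arbitrary choice): a short linear fade-in and fade-out applied to a block so that it starts and ends at 0, which avoids clicks at the block edges.

```csharp
// Apply a short linear fade-in/fade-out to a block of samples so it
// starts and ends at zero (avoids clicks at the block boundaries).
void ApplyEnvelope(float[] block, int rampLength)
{
    for (int i = 0; i < rampLength; i++)
    {
        float gain = (float)i / rampLength;   // ramps from 0 towards 1
        block[i] *= gain;                     // fade in at the start
        block[block.Length - 1 - i] *= gain;  // fade out at the end
    }
}
```

A ramp of around 64-128 samples is usually enough to remove audible clicks without changing the character of the sound.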
I am currently working on audio programming, so don't hesitate if you have any questions.
regards
I will try this when I get home!
If I have more questions, I'll let you know! :]
Thank you very much Nerevar!
In my previous code I let the AudioSource do the playing, but if I want to do the processing before the data is passed to OnAudioFilterRead(), I have to walk through the buffer myself, process it, and then pass the data on in OnAudioFilterRead(), right?
I figure I'd start by simply trying to play an AudioClip.
I have a float[] myBuffer which I filled using myAudioClip.GetData(). OnAudioFilterRead() runs through myBuffer and plays the values as audio. However, the pitch is currently too high on playback and, although the filter shows a processing time of < 1ms, moving the first person controller becomes very laggy.
What causes this? The audio file I am using is mono, 16bit, and has a 44100Hz sample rate. I am using a value of 2048 in OnAudioFilterRead(), the script is attached to a GameObject with an AudioSource that has its Audio Clip set to "none" and "Play On Awake" set to true.
EDIT: The lag was due to the size of myBuffer[].
using UnityEngine;
using System.Collections;

public class myPlay : MonoBehaviour
{
    public AudioClip myAudioClip;
    public float[] myBuffer;
    public int _BufferCount;
    int _N;

    void Start()
    {
        _BufferCount = myAudioClip.samples;
        _N = 0;
        myBuffer = new float[_BufferCount];
        myAudioClip.GetData(myBuffer, 0);
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int d = 0; d < 2048; d++)
        {
            data[d] = myBuffer[_N];
            _N++;
            if (_N >= _BufferCount) //>= rather than >, otherwise myBuffer[_BufferCount] is read out of range
            {
                _N = 0;
            }
        }
    }
}
Answer by Nerevar · May 09, 2014 at 09:21 AM
Hi,
About the pitch problem: check AudioSettings.outputSampleRate to see whether it matches your clip's sample rate (44100), and set the right value if necessary.
You should try to add another boolean:
ReadyToWrite (= _Buffer)
ReadyToRead (to check in OnAudioFilterRead() before you pass myBuffer[])
Also, you might have noticed that (depending on what operations you are doing on the audio data) you may have to separate the channels for processing. For your information, if I remember correctly: even indices of myBuffer are the left channel, odd indices are the right channel.
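Assuming that interleaving (left on even indices, right on odd), splitting a stereo buffer into per-channel buffers could look like this sketch (the method name is my own):

```csharp
// Split an interleaved stereo buffer (L R L R ...) into two channel buffers.
// 'left' and 'right' must each hold interleaved.Length / 2 floats.
void Deinterleave(float[] interleaved, float[] left, float[] right)
{
    for (int i = 0; i < interleaved.Length / 2; i++)
    {
        left[i] = interleaved[2 * i];       // even index: left channel
        right[i] = interleaved[2 * i + 1];  // odd index: right channel
    }
}
```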
Thank you! :]
I tried, but this does not change anything. Plus, can't I use one boolean for that? Write if _Buffer is true, and read if _Buffer is false?
Do you by any chance know how the size of data[] in OnAudioFilterRead() relates to the DSP buffer size and outputSampleRate in AudioSettings?
Here's my current code. Is this what you meant?:
using UnityEngine;
using System.Collections;

public class myPlay : MonoBehaviour
{
    public AudioSource myAudioSource;
    public AudioClip myAudioClip;
    float[] myBuffer;
    int _N;
    bool _Buffer;

    void Start()
    {
        _N = 0;
        AudioSettings.outputSampleRate = myAudioClip.frequency;
        myBuffer = new float[2048];
        myAudioClip.GetData(myBuffer, _N);
        _Buffer = false;
        myAudioSource.Play();
    }

    void Update()
    {
        if (_Buffer)
        {
            myAudioClip.GetData(myBuffer, _N);
            _Buffer = false;
        }
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (!_Buffer)
        {
            for (int a = 0; a < data.Length; a++)
            {
                data[a] = myBuffer[a];
            }
            _N += data.Length;
            _Buffer = true;
        }
    }
}
Yes, you can do it with one boolean; I just meant to add a condition on passing data to OnAudioFilterRead(). You did it right.
For info: the DSP buffer is usually a queue of 4 * 1024 floats: two 1024-float buffers are read in parallel (one for the left side and one for the right). After you pass the data[] of 2048 floats in OnAudioFilterRead(), Unity's DSP splits the data into two blocks of 1024 and adds them to the output queue (so you have a base delay of 1024 / outputSampleRate ≈ 23 ms).
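That latency figure is just the block length divided by the sample rate; a quick sanity check (assuming a 44100 Hz output rate, as in the clips above):

```csharp
// Base DSP latency: block length (in samples per channel) / sample rate.
int blockLength = 1024;     // one DSP block per channel
int sampleRate = 44100;     // assumed output sample rate
float latencySeconds = (float)blockLength / sampleRate;
// 1024 / 44100 = 0.0232... s, i.e. roughly 23 ms
```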
You can also try this code to check information on audioSettings:
using UnityEngine;
using System.Collections;

public class DSPTest : MonoBehaviour
{
    private int SizeOnAudio;
    private int DSPSize;
    private int QSize;
    public int newQSize;
    public int newDSPSize;

    // Use this for initialization
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
        AudioSettings.GetDSPBufferSize(out DSPSize, out QSize);
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        SizeOnAudio = data.Length;
    }

    void OnGUI()
    {
        GUILayout.Label("driverCaps : " + AudioSettings.driverCaps + "\n" + "outputSampleRate : " + AudioSettings.outputSampleRate + "\n" + "speakerMode : " + AudioSettings.speakerMode + "\n" + "DSPBufferSize : " + DSPSize + " * " + QSize);
        GUILayout.Label("SizeOnAudio : " + SizeOnAudio);
        // GUILayout.Label("DSPSize : " + DSPSize);
        // GUILayout.Label("QSize : " + QSize);
        if (GUILayout.Button(" Set DSP "))
            AudioSettings.SetDSPBufferSize(newDSPSize, newQSize);
    }
}
You can also check your fps; it should stay above 50, otherwise you are not processing data fast enough for the DSP.
http://wiki.unity3d.com/index.php?title=FramesPerSecond
What is in your clip, by the way? I will try to make some more tests on OnAudioFilterRead() and get back to you.
Okay, I checked your function. I don't get a single glitch, but the pitch/speed doesn't match.. it is too high indeed :p
This is a question that came to me last week: how to access the pitch/speed parameters (as on the AudioSource) without modifying the general outputSampleRate.
I actually posted the question here
Yep
Actually it is logical: as we said, in stereo you process 1024 samples of real length every update, so your _N (offset) must be incremented by 1024 each time.
GetData() then (depending on the number of channels your clip has) returns the floats the same way as data[] is passed to Unity's DSP.
Example:

    mono clip data:    0 1 2 3 2
    GetData(myBuffer, offset)
    myBuffer:          0 1 2 3 2

    stereo clip data:  L: 0 1 2 3 2
                       R: 1 0 1 2 3
    GetData(myBuffer, offset)
    myBuffer (interleaved): 0 1  1 0  2 1  3 2  2 3

To conclude:
if mono: myBuffer[2048], step: 2048
if stereo: myBuffer[2048], step: 1024
The size of myBuffer is restricted by the length of data[] in OnAudioFilterRead(), so we just need to change the step.
If there are even more channels it gets a bit more complicated ><
GetData() is a stupid function, so it is up to the programmer to know what step to take to recover data correctly from the clip.
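Putting that together, reading a clip block by block with the channel-dependent step could look like this sketch (my own illustration; it assumes myBuffer.Length matches the data[] length of 2048 discussed above):

```csharp
// Refill myBuffer from the clip in blocks, advancing the offset by the
// number of sample frames consumed (samples per channel), not floats.
void RefillBuffer(AudioClip clip, float[] myBuffer, ref int offset, int channels)
{
    clip.GetData(myBuffer, offset);        // offset is counted in frames per channel
    offset += myBuffer.Length / channels;  // mono: step 2048, stereo: step 1024
    if (offset >= clip.samples)
    {
        offset = 0;                        // loop back to the start of the clip
    }
}
```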
I think now you should be able to process all you need with GetData() and OnAudioFilterRead().
Don't hesitate if you have more questions.
Answer by Nidre · Aug 23, 2015 at 01:19 PM
Haven't really read all of the comments, but I have written a library that plays audio clips using OnAudioFilterRead(); just leaving it here if anyone finds it useful.
Nice man! That is exactly what I needed to achieve homogeneous machine gun sounds!
Thank you for sharing this!
However, the sounds sound very different from the original. I have to dig into your code, I guess.
Answer by reefwirrax · Dec 07, 2014 at 10:01 AM
Hi, thanks for this great topic. As an audio programmer, I will try to summarise what I think is happening in OnAudioFilterRead().
It's on a separate thread because it is running on the soundcard, or in soundcard time, so in fact it isn't a very programmable function. The soundcard can only read and play data; trying to do real DSP processing in this thread will cause bugs, e.g. reading and writing the data to external filter code.
The soundcard asks for data of a set size every N seconds, e.g. 2048 samples, depending on what its latency MME / DX / ASIO buffer is set to.
So all the read-ahead and timing management of audio should be done on the processor and passed as a stream to the buffer; the trick is to not let the game's framerate drop too low.
Perhaps it's possible to get OnAudioFilterRead() to repeat the last buffer in case the fps is too slow to maintain audio; I will need to figure that out.
The InvokeRepeating() function runs in the same timeframe as Update(), yield, etc.; it is for firing cannons and such, but not for framerate-independent stuff, if I understand it right.
So the main rules with the audio function are that it reads many small buffers very fast, interleaves them across multiple channels, and that it is difficult (though highly customisable) to send audio data to the sound card independently of the processor.
I just ran the last bit of the code provided that is supposed to work, using an AudioClip and an AudioSource, and I get an error in OnAudioFilterRead: data not set to an instance of an object, at line 60...
And I think it should use 2048-size steps between buffers rather than 1024, because it's supposed to queue buffers like train carriages.
Hello :)
If you have a specific question, you should make your own post here or on the forum (I can't tell what caused the error you got, as you did not post your code).
Also you might need a different solution for processing audio depending on what application you are trying to make.
regards