Playing each channel of a midi track on separate audio sources?
Hello,
So I've recently been playing with a synthesizer in Unity that can be found here: http://forum.unity3d.com/threads/130104-UnitySynth-full-Xplatform-midi-synth. The project works right out of the box; I've spent quite a bit of time with it and it's definitely what I'm looking for, with a few exceptions.
First, the way I am using it is to send individual MIDI messages that specify the instrument, note, volume, pitch bend, etc., and this works: for example, I can play the same 6 notes on 3 different MIDI channels and all the notes play the way I want.
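Roughly what that looks like in code (heavily simplified; the NoteOn/NoteOff calls follow the StreamSynthesizer methods in that project, but double-check the exact signatures against your copy):

    using UnityEngine;
    // plus whatever using/namespace your copy of the synth scripts lives in

    // Simplified illustration of how the synth is being driven: the same notes
    // sent on three different MIDI channels, each channel with its own instrument.
    public class MultiChannelNoteTest : MonoBehaviour
    {
        public StreamSynthesizer midiStreamSynthesizer;  // created/loaded elsewhere

        private readonly int[] notes = { 60, 62, 64, 65, 67, 69 };  // 6 notes
        private readonly int[] programs = { 0, 24, 40 };            // instrument per channel

        public void PlayTestNotes()
        {
            for (int channel = 0; channel < 3; channel++)
                foreach (int note in notes)
                    midiStreamSynthesizer.NoteOn(channel, note, 100, programs[channel]);
        }

        public void StopTestNotes()
        {
            for (int channel = 0; channel < 3; channel++)
                foreach (int note in notes)
                    midiStreamSynthesizer.NoteOff(channel, note);
        }
    }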
My problem is that I would like to be able to apply sound effects and filters to each channel individually, but it seems like every MIDI channel is mixed together into a single sample buffer that is then read by OnAudioFilterRead, as you can see in the code below.
    public float[] sampleBuffer;

    private void OnAudioFilterRead (float[] data, int channels)
    {
        // This uses the Unity-specific float method we added to get the buffer
        midiStreamSynthesizer.GetNext (sampleBuffer);

        for (int i = 0; i < data.Length; i++) {
            data [i] = sampleBuffer [i] * gain;
        }
    }
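What I'm aiming for instead is essentially one of these per MIDI channel, each on its own GameObject with its own AudioSource, something like the sketch below. The GetNext overload that takes a channel number is hypothetical (it's the part I can't get working); I come back to it further down.

    using UnityEngine;
    // plus the using/namespace for the synth scripts in your project

    // One of these per MIDI channel, so Unity effects/filters can be applied
    // per channel through each GameObject's AudioSource.
    [RequireComponent(typeof(AudioSource))]
    public class ChannelAudioOutput : MonoBehaviour
    {
        public int midiChannel;                          // 0..15
        public float gain = 1f;
        public StreamSynthesizer midiStreamSynthesizer;  // shared synth instance, assigned elsewhere

        private float[] channelSampleBuffer;

        private void OnAudioFilterRead (float[] data, int channels)
        {
            if (midiStreamSynthesizer == null)
                return;

            if (channelSampleBuffer == null || channelSampleBuffer.Length != data.Length)
                channelSampleBuffer = new float[data.Length];

            // Hypothetical per-channel version of GetNext (doesn't exist yet).
            midiStreamSynthesizer.GetNext (midiChannel, channelSampleBuffer);

            for (int i = 0; i < data.Length; i++)
                data [i] = channelSampleBuffer [i] * gain;
        }
    }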
To show where that single buffer comes from: the GetNext function, which lives in a different script, looks like this:
    public void GetNext(float[] buffer)
    {
        // Call this to process the next part of audio and return it in raw form.
        ClearWorkingBuffer();
        FillWorkingBuffer();

        for (int x = 0; x < effects.Count; x++)
        {
            effects[x].doEffect(sampleBuffer);
        }

        ConvertBuffer(sampleBuffer, buffer);
    }
ClearWorkingBuffer does what the name implies. FillWorkingBuffer calls on each "voice" of the synthesizer to process an audio sample and then adds that data to the buffer. Each voice is responsible for playing an individual note, so if I play 1 note on 16 channels, 16 voices are used; if I play 4 notes on 4 channels, 16 voices are still used.
    if (synth.Channels == 2 && inst.allSamplesSupportDualChannel() == false)
    {
        float sample = inst.getSampleAtTime(note, 0, synth.SampleRate, ref time);
        sample = sample * (velocity / 127.0f) * synth.VolPositions[channel];

        workingBuffer[0, i] += (sample * fadeMultiplier * leftpan * gainControl);
        workingBuffer[1, i] += (sample * fadeMultiplier * rightpan * gainControl);
    }
After that happens, the buffer is converted so it can be read by the OnAudioFilterRead function:
    private void ConvertBuffer(float[,] from, float[] to)
    {
        const int bytesPerSample = 2; //again we assume 16 bit audio
        int channels = from.GetLength(0);
        int bufferSize = from.GetLength(1);
        int sampleIndex = 0;

        //UnitySynth
        if (!(to.Length == bufferSize * channels * bytesPerSample))
            Debug.Log("Buffer sizes are mismatched.");

        for (int i = 0; i < bufferSize; i++)
        {
            for (int c = 0; c < channels; c++)
            {
                // Apply master volume
                float floatSample = from[c, i] * MainVolume;

                // Clamp the value to the [-1.0..1.0] range
                floatSample = SynthHelper.Clamp(floatSample, -1.0f, 1.0f);
                to[sampleIndex++] = floatSample;
            }
        }
    }
So from what I can tell, the audio from each voice is processed one right after another and all put together into a single buffer that is then played. I've been trying to get each individual voice to write to a channel-specific buffer in this section, and then play each of those buffers on a separate audio source:
    workingBuffer[0, i] += (sample * fadeMultiplier * leftpan * gainControl);
    workingBuffer[1, i] += (sample * fadeMultiplier * rightpan * gainControl);
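In other words, since the voice already knows its channel at this point (it uses it for VolPositions and the pan), the change I have in mind is roughly this, where channelBuffers is a made-up name for an array of 16 stereo working buffers, one per MIDI channel, instead of the single shared workingBuffer:

    // Hypothetical: float[][,] channelBuffers = new float[16][,], each entry
    // sized [2, bufferSize], replacing the single shared workingBuffer.
    channelBuffers[channel][0, i] += (sample * fadeMultiplier * leftpan * gainControl);
    channelBuffers[channel][1, i] += (sample * fadeMultiplier * rightpan * gainControl);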
I've been at this for a little longer than a month now and I can't seem to get it to work, let alone sound right. I'm hoping someone with a little more skill can point out the problems with separating the channels.
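To make it concrete, the overall shape I've been trying to give the synth script is roughly the sketch below. Everything here (channelBuffers, the GetNext overload, ClearWorkingBuffers/FillWorkingBuffers) is my own naming, not part of UnitySynth, and the render-once-per-block bookkeeping is just my guess at how to stop every AudioSource from advancing the synth separately:

    private float[][,] channelBuffers;   // [midiChannel][stereoChannel, sampleIndex]
    private bool[] channelConsumed;      // has this channel already read the current block?
    private readonly object renderLock = new object();

    private void EnsureChannelBuffers(int samplesPerChannel)
    {
        if (channelBuffers != null)
            return;

        channelBuffers = new float[16][,];
        channelConsumed = new bool[16];
        for (int ch = 0; ch < 16; ch++)
        {
            channelBuffers[ch] = new float[2, samplesPerChannel];
            channelConsumed[ch] = true;  // force a render on the first request
        }
    }

    // Hypothetical overload: fill "buffer" with only one MIDI channel's audio.
    public void GetNext(int midiChannel, float[] buffer)
    {
        lock (renderLock)
        {
            EnsureChannelBuffers(buffer.Length / 2);   // assuming stereo, interleaved output

            // Render all voices once per audio block, no matter how many
            // AudioSources ask for data during that block.
            if (channelConsumed[midiChannel])
            {
                ClearWorkingBuffers();   // like ClearWorkingBuffer, but for every channelBuffers[ch]
                FillWorkingBuffers();    // like FillWorkingBuffer, but voices accumulate into channelBuffers[their channel]
                for (int ch = 0; ch < 16; ch++)
                    channelConsumed[ch] = false;
            }

            channelConsumed[midiChannel] = true;

            // Reuse the existing interleave/clamp code on this channel's buffer only.
            ConvertBuffer(channelBuffers[midiChannel], buffer);
        }
    }

Each per-channel script like the ChannelAudioOutput sketch above would then call GetNext with its own channel number.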
Also, I apologize for the hack job on the code I posted; I believe these are the relevant sections, since the two scripts in their entirety are close to 1000 lines.
I haven't used this synth, but it sounds like a cool idea, and I've been thinking about similar ideas unrelated to the synth, so maybe this could work.
Is it possible to call the synth multiple times? If so, just call it several times but only play one channel each time you call it, and then set the volume for each track independently. Might be too complex for what you're looking for, but I've only used outside DAWs that can split my tracks even before I put them into Unity, so no expert here, just an idea.
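Something like this is what I mean, totally untested and with the NoteOn/NoteOff calls guessed from that synth's scripts: give every MIDI channel its own synth instance and its own AudioSource, and only forward each note message to the instance that owns that channel.

    using UnityEngine;
    // plus the using/namespace for the synth scripts

    // Forwards note messages to whichever per-channel synth owns that channel.
    public class PerChannelSynthRouter : MonoBehaviour
    {
        public ChannelSynthOutput[] outputs = new ChannelSynthOutput[16];

        public void NoteOn(int channel, int note, int velocity, int program)
        {
            outputs[channel].synth.NoteOn(channel, note, velocity, program);
        }

        public void NoteOff(int channel, int note)
        {
            outputs[channel].synth.NoteOff(channel, note);
        }
    }

    // One of these per MIDI channel; since its synth instance only ever hears
    // that channel's notes, its normal GetNext output is already just that
    // channel, and the AudioSource volume/effects can be set independently.
    [RequireComponent(typeof(AudioSource))]
    public class ChannelSynthOutput : MonoBehaviour
    {
        public StreamSynthesizer synth;   // one instance per channel, created elsewhere
        public float gain = 1f;

        private float[] sampleBuffer;

        private void OnAudioFilterRead(float[] data, int channels)
        {
            if (synth == null)
                return;

            if (sampleBuffer == null || sampleBuffer.Length != data.Length)
                sampleBuffer = new float[data.Length];

            synth.GetNext(sampleBuffer);

            for (int i = 0; i < data.Length; i++)
                data[i] = sampleBuffer[i] * gain;
        }
    }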
I tried that when I first started out, but the sound was... well, it kind of sounded like a squirrel massacre. My new thought is to try to write each voice to a separate buffer and then use that buffer to "SetData" on an AudioClip and loop the clip.
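Roughly what I mean, using a streaming AudioClip instead of calling SetData by hand (GetNext with a channel argument is still the hypothetical per-channel method from above, and this is untested):

    using UnityEngine;
    // plus the using/namespace for the synth scripts

    // One streaming clip per MIDI channel: Unity keeps asking OnAudioRead for
    // more samples as the looping clip plays, and we hand it that channel's audio.
    [RequireComponent(typeof(AudioSource))]
    public class ChannelClipStreamer : MonoBehaviour
    {
        public int midiChannel;
        public StreamSynthesizer midiStreamSynthesizer;   // shared synth, assigned elsewhere
        public int sampleRate = 44100;

        void Start()
        {
            AudioClip clip = AudioClip.Create(
                "SynthChannel" + midiChannel,
                sampleRate,       // one second of samples; it loops and streams
                2,                // stereo
                sampleRate,
                true,             // stream: Unity calls OnAudioRead as it plays
                OnAudioRead);

            AudioSource source = GetComponent<AudioSource>();
            source.clip = clip;
            source.loop = true;
            source.Play();
        }

        void OnAudioRead(float[] data)
        {
            // Unity decides how many samples it wants each call, and that length
            // may not match the synth's internal block size, so the two would
            // need to be lined up (or buffered) to avoid glitches.
            midiStreamSynthesizer.GetNext(midiChannel, data);
        }
    }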
Well, I tried using OnAudioRead to write each channel to a different AudioSource, which sort of worked. I got sound out, but there was a lot of clicking and there were pauses throughout the note being played. Looks like I'm heading back to square one.