- Home /
Need help/advice with a Unity WebGL Voice chat implementation
Hey All,
I have a Unity multiplayer game using PUN on WebGL, where I need to include a Voice Chat feature.
The problem is that Unity WebGL currently has very little support for the Microphone class, so existing solutions like PUN Voice do not work. This left me with trying to make a js plugin to access the microphone and send the samples to Unity, where I send them through PUN RPCs to other players. On the receiver end, I have some more native code to play the audio samples through the speakers (as AudioClip.SetData also doesn't seem to work for constantly changing buffers on WebGL).
Here is what I have so far:
JS code for accessing the microphone, and sending microphone input to Unity as a string of samples:
Call_MicroPhone: function(){
//if(!audioContext){
audioContext = new AudioContext();
//}
if (!navigator.getUserMedia)
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia || navigator.msGetUserMedia;
if (navigator.getUserMedia){
BUFF_SIZE= 2048;
navigator.getUserMedia({audio:true},
function(stream) {
alert('start capturing audio.');
console.log("StartMic");
gain_node = audioContext.createGain();
gain_node.connect( audioContext.destination );
microphone_stream = audioContext.createMediaStreamSource(stream);
//microphone_stream.connect(gain_node);
script_processor_node = audioContext.createScriptProcessor(BUFF_SIZE, 1, 1);
script_processor_node.onaudioprocess = function(event) {
var i, N, inp, microphone_output_buffer;
microphone_output_buffer = event.inputBuffer.getChannelData(0); // just mono - 1 channel for now
// microphone_output_buffer <-- this buffer contains current gulp of data size BUFF_SIZE
};
microphone_stream.connect(script_processor_node);
script_processor_fft_node = audioContext.createScriptProcessor(2048, 1, 1);
script_processor_fft_node.connect(gain_node);
analyserNode = audioContext.createAnalyser();
analyserNode.smoothingTimeConstant = 0;
analyserNode.fftSize = 2048;
microphone_stream.connect(analyserNode);
analyserNode.connect(script_processor_fft_node);
script_processor_fft_node.onaudioprocess = function(event) {
microphone_out_buffer = event.inputBuffer.getChannelData(0);
// get the average for the first channel
var array = new Uint8Array(analyserNode.frequencyBinCount);
analyserNode.getByteFrequencyData(array);
// draw the spectrogram
if (microphone_stream.playbackState == microphone_stream.PLAYING_STATE) {
//console.log('--->show some data--->'+array.toString()); SendMessage("CharacterSound","OnGotStream",microphone_out_buffer.toString());
//show_some_data(array, 5, "from fft");
}
};
},
function(e) {
alert('Error capturing audio.');
}
);
} else { alert('getUserMedia not supported in this browser.'); }
}
And here the audio samples are sent to other players via an RPC:
[DllImport("__Internal")]
private static extern void Call_MicroPhone();
void Start () {
Call_MicroPhone();
}
public void OnGotStream(string streamData){// The recieved samples are placed into a buffer so that we can send them after receiving a certain amount of data at regular intervals
inputStringArr = streamData.Split (","[0]);
floatArrayToSend = new float[inputStringArr.Length];
for(int i=0;i<inputStringArr.Length;i++){
floatArrayToSend[i] =float.Parse(inputStringArr[i]);
}
}
void Update()
{
if (photonView.isMine && Time.time-lastUpdateTIme>.05f) {//Every .05 seconds the audio samples currently in the buffer are sent
lastUpdateTIme=Time.time;
photonView.RPC("Call_CharacterAudio",PhotonTargets.Others);
}
}
On receiving the RPC, audio samples are sent back to the native code:
[DllImport("__Internal")]
private static extern void PlayAudioFromBuffer (float[] arr, int size);
public void Call_PlayAudioFromBuffer(float[] arr, int size){
PlayAudioFromBuffer (arr,size);
}
[PunRPC]
public void Call_CharacterAudio(){
#if UNITY_WEBGL
Call_PlayAudioFromBuffer (floatArrayToSend,floatArrayToSend.Length);
#endif
}
Finally, the samples are played through the speakers:
PlayAudioFromBuffer : function(arr,size){
if(audioContext == null){
audioContext = new AudioContext();
}
var myBuffer = audioContext.createBuffer(1,2048,44100);
var nowBuffering = myBuffer.getChannelData(0);
for(var i=0;i<size;i++){
nowBuffering[i] = HEAPF32[(arr>>2)+i];
}
// console.log('-------->'+nowBuffering.toString());
//myBuffer.copyToChannel(arr,0,0);
var source = audioContext.createBufferSource();
source.buffer = myBuffer;
source.connect(audioContext.destination);
source.start(0);
}
Results thus far:
I seem to be able to get the microphone data, and if played back locally through the speakers, it works reasonably well. However, when I pass the stream samples to Unity, and send it over to the recipient, the audio on that end is almost entirely just noise.
Apologies for the noobish code. I am pretty new to both Unity and audio programming so I would really appreciate the help! Thanks!
Answer by jocyf · Apr 11, 2020 at 07:32 PM
Hi, I'm looking into making a voice chat for multiplayer in WebGL. I was thinking about getting the audio stream using jslib code to capture the microphone and pass it to photon voice. But it isn't posible at all. Photon.Voice is not an option (it doesn't work in WebGL because multithtreating).
Your solucion isn't a bad idea at all. Passing the stream using a usual RPC ¿Did you get it working or it is a dead end?
Answer by FrostweepGames · May 17, 2020 at 09:23 AM
Hello, look at this solution: https://github.com/frostweep/PUN2VoiceWebGLUnity It provides functionality for voice chat for different paltforms including webgl.
Your answer
Follow this Question
Related Questions
Question about multiplayer game 0 Answers
How to upload WebGL games on reputed websites? 1 Answer
WebGL, Facebook drag background image 0 Answers
Is Unity Webgl Game Work In Android Mobile? 1 Answer
Unity 5 multiplayer free look camera 0 Answers