how can I send a render texture over the network?
Hello.
I have a Unity server and a client application, both running completely different scenes. I would like to send a render texture that was rendered on the server to the client over the network and display it there on a surface.
How can I achieve this?
I want to send video, but for the moment video compression is not required, i.e. it's enough to send single frames at a rate of 5-10 fps. Client and server are, for this prototype, running on the same computer anyway, so transfer rate is not an issue.
To convert, see answers like
http://answers.unity3d.com/questions/27968/getpixels-of-rendertexture.html
http://answers.unity3d.com/questions/9969/convert-a-rendertexture-to-a-texture2d.html
As you mentioned later, use Texture2D.EncodeToPNG as well as Texture2D.LoadImage to transform from and to byte[].
Regarding sending that info, or any info: in Unity, RPCs are very limited. Any time you want to move stuff around, you just have to encode it into strings...
http://unity3d.com/support/documentation/Components/net-RPCDetails.html
Notice the sentence "You can use the following variable types as parameters in RPCs:"
For extreme networking problems of this type, you may have to go to the iOS / OSX level. Look for "Native Toolkit - Unity to iOS bridge." Once at the iPhone/Mac native networking level, look to AsyncSocket (the usual networking library in iOS) or GameKit (an 'ez networking' starter system which is built-in).
Important note:
Sycla claims that this documentation
http://unity3d.com/support/documentation/Components/net-RPCDetails.html
is wrong, and there are other types supported by RPC, notably byte arrays. If so, that's incredibly awesome, and that information utterly swamps in importance everything else at hand. Triple-cheers, Sycla!
Not really- since once you've done that, you still need to send it over the network! Which is the bit I object to, because sending images is always going to be slow, and I'm not sure how you would go about sending fullscreen uncompressed image data 15 times a second.
We're talking specifically Networking here, and no, it's not straightforward to send objects that aren't simple primitives (for example, texture data).
Nope! You can use byte arrays as well! Strings run out of characters depressingly fast, so they're useless for sending this kind of data. However, one of the (many) undocumented features is that RPCs support byte array parameters, as well as the ones they tell you about! But, as I said, it's not straightforward. Although, since all the code for it is outlined above (in my post), I suppose you could say it is straightforward!
Here's some code that might help. Some definitions you'll want at the top:
Texture2D tempTex;
Texture2D remoteTexture;
and in your Update or elsewhere:
if (Network.peerType == NetworkPeerType.Server || Network.peerType == NetworkPeerType.Client)
{
    byte[] bytesToSend = tempTex.EncodeToPNG();
    //Debug.Log("Sending bytes: " + bytesToSend.Length);
    if (bytesToSend.Length > 0)
        networkView.RPC("ReceivePNG", RPCMode.Others, bytesToSend);
    else
        Debug.LogError("Bad length of bytes to send.");
}
then on the other end:
[RPC]
public void ReceivePNG(byte[] bytes)
{
    if (bytes.Length < 1)
    {
        Debug.LogError("Received bad byte count from network.");
        return;
    }
    remoteTexture.LoadImage(bytes);
    //now apply your new texture as you please
}
Note that if you use SetPixels to define tempTex, you don't have to call Apply() before encoding to PNG, since EncodeToPNG reads the CPU-side pixel data.
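The snippet above assumes tempTex already holds the frame. As a sketch of how you might fill it each frame before encoding (assuming a RenderTexture field named sourceTex; the field and method names are illustrative, not from the original post):

```csharp
// Sketch: populate tempTex from a RenderTexture before calling EncodeToPNG.
// Assumes a RenderTexture field named sourceTex on the same component.
void CaptureFrame()
{
    RenderTexture.active = sourceTex;   // make the render texture readable
    if (tempTex == null)
        tempTex = new Texture2D(sourceTex.width, sourceTex.height, TextureFormat.RGB24, false);
    tempTex.ReadPixels(new Rect(0, 0, sourceTex.width, sourceTex.height), 0, 0);
    RenderTexture.active = null;        // restore the default render target
    // No Apply() needed: ReadPixels fills the CPU-side data that EncodeToPNG reads.
}
```

You could call this from Update (or a slower coroutine, given the 5-10 fps target) just before the RPC send shown above.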
Answer by syclamoth · Nov 30, 2011 at 09:51 AM
Sending streaming video data over a network connection isn't really something Unity is optimised for. It doesn't provide any serialization classes for video streams, so you'd have to use some kind of 3rd-party plugin for that.
However, for this specific case, is there any reason why you need to run the server and client in completely different scenes? If the client has all the information needed to recreate the server's scene, you can just use a camera/render-texture pair to create the picture client-side, using the (much thinner) position/rotation information of every dynamic object! You'd get better picture quality that way, too.
EDIT: Ok, I'll give you some example code to get you started. I still have my misgivings about this whole idea, but if you're sure you want to do it this way, this is how you'd do it. C# because this is kind of complicated and I don't know enough JS to do it in that.
First up, the texture data from the RenderTexture-
public Color[] GetRenderTexturePixels(RenderTexture tex)
{
    RenderTexture.active = tex;
    Texture2D tempTex = new Texture2D(tex.width, tex.height);
    tempTex.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0);
    tempTex.Apply();
    return tempTex.GetPixels();
}
Now that you have an array of Colors, you can convert this into a byte array for sending it over the network. This is a two-stage process, to account for discrepancies in assembly versions between the standalone players on different architectures.
You'll need to include all of these libraries (phew!).
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Reflection;
byte[] SerializeObject<T>(T objectToSerialize)
{
    BinaryFormatter bf = new BinaryFormatter();
    MemoryStream memStr = new MemoryStream();
    bf.Serialize(memStr, objectToSerialize);
    memStr.Position = 0;
    return memStr.ToArray();
}

T DeserializeObject<T>(byte[] dataStream)
{
    MemoryStream stream = new MemoryStream(dataStream);
    stream.Position = 0;
    BinaryFormatter bf = new BinaryFormatter();
    bf.Binder = new VersionFixer();
    T retV = (T)bf.Deserialize(stream);
    return retV;
}
sealed class VersionFixer : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        // For each assemblyName/typeName that you want to deserialize to
        // a different type, set typeToDeserialize to the desired type.
        Type typeToDeserialize = null;
        String assemVer1 = Assembly.GetExecutingAssembly().FullName;
        if (assemblyName != assemVer1)
        {
            // Rebind types from other assembly versions to the
            // currently executing assembly.
            assemblyName = assemVer1;
        }
        // The following line of code returns the type.
        typeToDeserialize = Type.GetType(String.Format("{0}, {1}", typeName, assemblyName));
        return typeToDeserialize;
    }
}
You need all of that code to serialize the objects, but when that's done you can use
byte[] colourArray = SerializeObject<Color[]>(GetRenderTexturePixels(myRenderTexture));
to convert a rendertexture into a byte array that you can send over the network.
When you have that, just use either an RPC or normal serialization to send the object every network update. I warn you, it's big.
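As a sketch of that send/receive step, here is how the pieces could fit together over an RPC (assuming the SerializeObject/DeserializeObject helpers above and Unity's legacy networkView RPCs; SendFrame and ReceiveFrame are illustrative names, and the byte-array RPC parameter relies on the undocumented support discussed above):

```csharp
// Sketch: send the serialized colour data plus dimensions via a legacy RPC.
// Assumes the SerializeObject<T>/DeserializeObject<T> helpers defined above.
void SendFrame(RenderTexture myRenderTexture)
{
    byte[] colourArray = SerializeObject<Color[]>(GetRenderTexturePixels(myRenderTexture));
    // Send width/height alongside the data so the receiver can rebuild the texture.
    networkView.RPC("ReceiveFrame", RPCMode.Others,
                    colourArray, myRenderTexture.width, myRenderTexture.height);
}

[RPC]
void ReceiveFrame(byte[] receivedBytes, int width, int height)
{
    Color[] receivedColours = DeserializeObject<Color[]>(receivedBytes);
    Texture2D receivedTex = new Texture2D(width, height);
    receivedTex.SetPixels(receivedColours);
    receivedTex.Apply();
    // Now assign receivedTex to a material, e.g. renderer.material.mainTexture = receivedTex;
}
```

Sending the two dimension ints with every frame is cheap relative to the pixel payload, and saves you having to agree on a fixed resolution out-of-band.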
When it arrives at the other end (however you do that), use
Color[] receivedColours = DeserializeObject<Color[]>(receivedBytes);
Texture2D receivedTex = new Texture2D(receivedWidth, receivedHeight); // send the dimensions alongside the data (I guess 2 extra numbers aren't such a big deal)
receivedTex.SetPixels(receivedColours);
receivedTex.Apply();
Now, you can't shunt that back into a RenderTexture- however, there's not really any need to, since you can just display that on an object now.
EDIT Looks like there's another step in here, too, if you need to use Color32s. We'll have to convert the Color32 array to a type that can be serialized properly, before we can send it over the network. Use something like this-
[System.Serializable]
public class SerializableColor32
{
    byte r;
    byte g;
    byte b;
    byte a;

    public SerializableColor32(Color32 source)
    {
        r = source.r;
        g = source.g;
        b = source.b;
        a = source.a;
    }

    public Color32 ConvertToColor32()
    {
        return new Color32(r, g, b, a);
    }
}

public SerializableColor32[] MakeSerializable(Color32[] input)
{
    SerializableColor32[] retV = new SerializableColor32[input.Length];
    for (int i = 0; i < input.Length; i++)
    {
        retV[i] = new SerializableColor32(input[i]);
    }
    return retV;
}

public Color32[] BackToColors(SerializableColor32[] input)
{
    Color32[] retV = new Color32[input.Length];
    for (int i = 0; i < input.Length; i++)
    {
        retV[i] = input[i].ConvertToColor32();
    }
    return retV;
}
Then, using those functions, you can do this-
byte[] colourArray = SerializeObject<SerializableColor32[]>(MakeSerializable(GetRenderTexturePixels(myRenderTexture)));
And this-
Color32[] receivedColours = BackToColors(DeserializeObject<SerializableColor32[]>(receivedBytes));
(phew)
yes - the server and the client must run different scenes (i.e. the client must not have access to the data present on the server). Also, this is not going to be a real game implementation but rather for a proof-of-concept project. I need to send a render texture from the server to the client.
Basically, you need a plugin to do that. Why is it so important that the client not have access to the server's data? In any case, you need only send the data relevant to that particular frame- there's no need to send everything, and even then there's still no reason to have the server in a different scene. Networking isn't magical- nothing gets sent if you don't tell it to get sent.
In what way do Color structs not have alpha values?! That isn't a reason to require Color32s. However, the slightly quicker version of SetPixels is.
I seem to recall that, although the GetPixels function returns 4-component vectors, the alpha value is ignored - but I might be wrong. Just to give some feedback: your method works for my purposes - thank you! However, I use Texture2D.EncodeToPNG as well as Texture2D.LoadImage to transform from and to byte[] - which turned out to be faster than the serialization. Thank you for your help!