Server-side rendering with no display
I'm hoping to get some advice on the best approach for some experimental code we're trying to implement.
We would like to host a scene on a (Linux) server. The server should render frames into GPU buffers, from where a native plugin takes over for further processing. It is important to note that we don't need the scene to be displayed in any way.
My team is just starting out with Unity development (and with graphics development in general), so there's a lot we have yet to learn.
What we have done so far:
We have a working native plugin for processing the frames, but so far it only works inside the editor (Windows).
To investigate how to proceed, we've set up a much simpler project, without the native plugin part, that periodically writes frames to disk as PNGs. This seems to work when building a Linux player with normal settings.
However, if we build with the "Server Build" flag set, or run with the -batchmode -nographics options, the log reports that the NullGfxDevice is being used and every frame comes out as a fully transparent PNG. (In case it matters, I'm building with the Linux editor, 2018.3.6f1.)
In the simplest scenario, we have a Camera whose targetTexture is set to a RenderTexture, and we call camera.Render() from the Update() event. A coroutine waits on WaitForEndOfFrame, copies the RenderTexture into a Texture2D and writes it to disk as a PNG. A stripped-down sketch is below.
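Roughly, it looks like this (simplified; class name, resolution and frame naming are just placeholders):

```csharp
using System.Collections;
using System.IO;
using UnityEngine;

public class FrameDumper : MonoBehaviour
{
    public Camera captureCamera;        // camera that renders into the off-screen target
    public RenderTexture targetTexture; // RenderTexture the camera renders into

    int frameIndex;

    void Start()
    {
        captureCamera.targetTexture = targetTexture;
        StartCoroutine(CaptureLoop());
    }

    void Update()
    {
        // Render the scene into the off-screen RenderTexture.
        captureCamera.Render();
    }

    IEnumerator CaptureLoop()
    {
        while (true)
        {
            // Wait until all rendering for this frame has finished.
            yield return new WaitForEndOfFrame();

            // Copy the RenderTexture into a readable Texture2D.
            var previous = RenderTexture.active;
            RenderTexture.active = targetTexture;
            var tex = new Texture2D(targetTexture.width, targetTexture.height, TextureFormat.RGBA32, false);
            tex.ReadPixels(new Rect(0, 0, targetTexture.width, targetTexture.height), 0, 0);
            tex.Apply();
            RenderTexture.active = previous;

            // Write the frame to disk as a PNG.
            File.WriteAllBytes($"frame_{frameIndex++:D5}.png", tex.EncodeToPNG());
            Destroy(tex);
        }
    }
}
```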
So our questions are:
Is what we are trying to do possible at all? There seem to be questions out there describing cases that we think are similar to ours; we've tried to model our simplest scenario on those, but it does not seem to work.
What are the differences between building with "Server Build" enabled or disabled (and later running with the -batchmode -nographics flags)?
Is there any command line option we could use to force the initialization of a graphics device?
I've seen references describing something similar, but the solution basically amounts to setting up a fake/invisible X server (e.g. Xvfb) so that the unneeded "drawing" is redirected there, a bit like piping to /dev/null. Is this the only way?
One route I can think of is having a native plugin take care of initializing the graphics library, and then expose APIs for creating/releasing native textures that Unity could use for rendering via CreateExternalTexture (although this only seems to apply to Texture2D). Would something like that work? I'm unsure, because there seems to be no way of telling Unity which type of native texture is being passed in (I guess the real use case assumes all textures are of the same type as the GfxDevice). A rough sketch of what I mean is below.
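Something like this is what I have in mind (only a sketch of the idea; CreateNativeTexture/DestroyNativeTexture are entry points we would have to write in our own plugin, they are not existing APIs):

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class ExternalTextureHost : MonoBehaviour
{
    // Hypothetical exports from our own native plugin, which would own the
    // graphics context and the actual GPU texture.
    [DllImport("OurRenderPlugin")]
    static extern IntPtr CreateNativeTexture(int width, int height);

    [DllImport("OurRenderPlugin")]
    static extern void DestroyNativeTexture(IntPtr nativeTex);

    Texture2D wrapped;
    IntPtr nativeHandle;

    void Start()
    {
        nativeHandle = CreateNativeTexture(1920, 1080);

        // Wrap the native texture handle so Unity can sample it like any Texture2D.
        wrapped = Texture2D.CreateExternalTexture(
            1920, 1080, TextureFormat.RGBA32, false, false, nativeHandle);
    }

    void OnDestroy()
    {
        if (nativeHandle != IntPtr.Zero)
            DestroyNativeTexture(nativeHandle);
    }
}
```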
A route similar to the previous one could be as described here: a native plugin taking care of (all?) the rendering, with Unity just triggering it each frame (see the sketch after this paragraph). Here what we would like to understand is how much Unity functionality we would be losing.
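As a sketch, the C# side would just kick the plugin from the render thread each frame (GetRenderEventFunc would again be exported by our own plugin, and the event ID is just an arbitrary tag we agree on):

```csharp
using System;
using System.Collections;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeRenderDriver : MonoBehaviour
{
    // Hypothetical export from our own plugin returning its render-event callback.
    [DllImport("OurRenderPlugin")]
    static extern IntPtr GetRenderEventFunc();

    IEnumerator Start()
    {
        while (true)
        {
            yield return new WaitForEndOfFrame();
            // Ask Unity's render thread to call into the native plugin,
            // which then issues its own graphics API calls.
            GL.IssuePluginEvent(GetRenderEventFunc(), 1);
        }
    }
}
```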
Thanks very much in advance and best regards
Julian