Best way to render viewport at lower resolution for performance?
I'm targeting Android and iOS and am trying to figure out a good way to render the camera viewport at a lower resolution than the rest of the game - mainly to boost performance, but also to give the game a retro, pixelated look. I only want to reduce the resolution of the camera viewport, not the whole game, since the UI needs to stay sharp (and since I couldn't point-resample the UI, it would end up blurry rather than pixelated).
Currently I do this using a render texture - I deactivate the Camera component of the main camera and use a second camera to render the actual scene, which I attach this script to:
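(The script itself wasn't preserved in this thread. A minimal sketch of such a render-texture setup, assuming a public `factor` field and a secondary camera component on the same object - the class name and field names here are hypothetical - might look like this:)

```csharp
using UnityEngine;

// Attached to the secondary camera that renders the actual scene.
// Renders into a temporary RenderTexture at a fraction of native
// resolution, then blits the result to the screen with point
// filtering so the upscale stays pixelated rather than blurry.
public class LowResRenderer : MonoBehaviour
{
    [Range(0.1f, 1f)]
    public float factor = 0.25f; // fraction of native resolution

    private RenderTexture lowResTexture;

    void OnPreRender()
    {
        int w = Mathf.Max(1, (int)(Screen.width * factor));
        int h = Mathf.Max(1, (int)(Screen.height * factor));
        lowResTexture = RenderTexture.GetTemporary(w, h, 24);
        lowResTexture.filterMode = FilterMode.Point; // point resampling
        GetComponent<Camera>().targetTexture = lowResTexture;
    }

    void OnPostRender()
    {
        GetComponent<Camera>().targetTexture = null;
        // Blit the low-res result to the back buffer (null target).
        Graphics.Blit(lowResTexture, null as RenderTexture);
        RenderTexture.ReleaseTemporary(lowResTexture);
    }
}
```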
Visually this works reasonably well. However, it doesn't seem to have much performance impact at all. On a low-end device that struggles with framerate, I get 11 FPS when the factor is 1 (i.e. no downsampling, just drawing at native resolution) and 14 FPS when the factor is 0.25 (i.e. it's supposed to render at a quarter of native resolution). Unless I have a huge bottleneck somewhere else, this doesn't seem right - rendering at a quarter of native resolution should make an enormous performance difference.
Is it actually rendering at full res and then just downscaling? That would obviously defeat the purpose - is there a better way to do this than what I set up?
Did you check the profiler? You can attach it to the running Android app. That will tell you whether you have a bottleneck somewhere; you shouldn't be guessing when it comes to performance.
As for how rendering to a texture works, I'm not sure of the exact implementation, but it probably just renders at the desired resolution. If it were rendering at native resolution and then downscaling, you would get worse FPS (more code running to do the downscale), but you're seeing an improvement.
I have indeed been monitoring the profiler attached to the Android app - details here: https://answers.unity.com/questions/1624567/extremely-poor-android-performance-even-in-simple.html
I have since been able to improve things somewhat through more aggressive optimization in many places, but the core issue remains.
Answer by Bunny83 · Apr 22, 2019 at 03:04 PM
It just means the reason for your bad performance is not the fragment shaders of your objects. Everything else stays the same when you render at a different resolution. Older / low-end devices supported only a few draw calls (keeping the count below 10 was necessary for some devices). So on most mobile devices the actual bottleneck is the bridge between the CPU and the GPU. The GPU itself isn't that bad.
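(For example, one common way to cut draw calls is to batch static geometry that shares a material; a sketch, assuming a hypothetical root object whose static children share materials:)

```csharp
using UnityEngine;

// At startup, combines all static child meshes under this root into
// batches, so objects that share a material can be drawn together
// instead of issuing one draw call each.
public class BatchOnStart : MonoBehaviour
{
    void Start()
    {
        // Children must share materials for batching to help, and
        // combined meshes can no longer move independently.
        StaticBatchingUtility.Combine(gameObject);
    }
}
```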
What shader(s) do you actually use? The Standard shader isn't suited for mobile, let alone old devices.
Thanks for the reply!
I currently use five shaders:
Mobile / Diffuse (for realtime/moving objects)
Mobile / Vertex Lit (for the vast majority of objects, all lit by a lightmap)
Mobile / Particles Additive and Mobile / Particles Alpha Blend (for particles/effects)
Custom / Emission (for objects that should appear to emit light)
The Custom / Emission shader is built upon the Legacy / Diffuse one, so it shouldn't be causing too many issues (disabling it entirely has no performance impact, anyway).
All this being said, while I stick to the Mobile shaders, I do have quite a few materials - around 15 for objects that will always be in the game, plus another 15 or so that are only sometimes used.