Best way to change the downscaling algorithm for Camera-to-RenderTexture rendering
[Win 10, DX12] System compatibility is secondary here.
I'm looking to render a Camera's view into a very low-resolution RenderTexture (down to 2x1, don't ask). The issue I have with the default, out-of-the-box way of attaching a RenderTexture to a camera is that only some of the high-res pixels appear to get mapped onto the 2x1 RenderTexture. In other words, the default behavior seems to isolate a single pixel from the high-res image and map just that onto a pixel of the 2x1 texture. So only a handful of pixels (3, possibly 3 quads) of my original camera image are actually used. If objects move in between these sample points, they simply are NOT displayed in my 2x1 output.
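For reference, here is roughly my current setup, as a minimal sketch (the class and field names are just illustrative): I simply assign a 2x1 RenderTexture as the camera's target and let it render straight into it.

```csharp
using UnityEngine;

public class LowResCapture : MonoBehaviour
{
    public Camera sourceCamera;      // the camera whose view I want to shrink down
    private RenderTexture lowResRT;  // the tiny 2x1 target

    void Start()
    {
        // Create the 2x1 render texture and make the camera render directly into it.
        lowResRT = new RenderTexture(2, 1, 24, RenderTextureFormat.ARGB32);
        lowResRT.filterMode = FilterMode.Bilinear;
        lowResRT.Create();

        sourceCamera.targetTexture = lowResRT;
    }

    void OnDestroy()
    {
        if (lowResRT != null)
        {
            sourceCamera.targetTexture = null;
            lowResRT.Release();
        }
    }
}
```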
What I'm looking to accomplish: I would like the average color of each half of the viewport to be computed and written to the corresponding pixel of my 2x1 RenderTexture. So if there is a red object in one half of the viewport, no matter where it is, it raises the red level of that pixel rather than being ignored.
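To make "average of each half" concrete, here is a brute-force CPU sketch of the result I'm after (the names HalfAverager/AverageHalves are hypothetical, and it assumes the camera has first rendered into a larger intermediate RenderTexture, e.g. 256x128). ReadPixels stalls the pipeline, so this is only meant to illustrate the math, not something I'd want to run every frame:

```csharp
using UnityEngine;

public static class HalfAverager
{
    // Averages the left and right halves of a source RenderTexture and writes
    // the two resulting colors into a 2x1 Texture2D. Purely illustrative.
    public static Texture2D AverageHalves(RenderTexture source)
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = source;

        // Copy the full-resolution render into CPU-readable memory.
        Texture2D fullRes = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
        fullRes.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
        fullRes.Apply();
        RenderTexture.active = previous;

        Color[] pixels = fullRes.GetPixels();
        Color leftSum = new Color(0, 0, 0, 0);
        Color rightSum = new Color(0, 0, 0, 0);
        int half = source.width / 2;

        // Sum every pixel into either the left or right bucket.
        for (int y = 0; y < source.height; y++)
        {
            for (int x = 0; x < source.width; x++)
            {
                Color c = pixels[y * source.width + x];
                if (x < half) leftSum += c; else rightSum += c;
            }
        }

        // Each half's average becomes one pixel of the 2x1 result.
        int pixelsPerHalf = half * source.height;
        Texture2D result = new Texture2D(2, 1, TextureFormat.RGBA32, false);
        result.SetPixel(0, 0, leftSum / pixelsPerHalf);
        result.SetPixel(1, 0, rightSum / pixelsPerHalf);
        result.Apply();
        return result;
    }
}
```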
How do I best accomplish this?
PS: I would like to know whether this can be done with a shader, for performance reasons.
Any ideas on the best way to proceed would be appreciated.