Accessing camera "backbuffer" in surface shader
Hi Everyone - this is my first question here, so I hope I'm following all the rules. I checked the existing questions but couldn't find anything, so I'm creating my own.
Essentially, I'm trying to implement a refraction shader that uses the output of a camera (based on this GPU Gem: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html)
What is the best (most performant/efficient) way to pass camera render data to a surface shader as a texture?
I've got a really rough experimental start using C#, with the following script attached to a camera:
```csharp
using UnityEngine;
using System.Collections;

public class camPostRenderScript : MonoBehaviour {
    public Renderer display;
    Texture2D tex;

    void Start () {
        tex = new Texture2D(Screen.width, Screen.height);
    }

    // Called after this camera finishes rendering; the screen is the
    // active render target at this point
    void OnPostRender () {
        // Copy the framebuffer into the texture (a GPU -> CPU readback)
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply(false);
        display.material.mainTexture = tex;
    }
}
```
However, this has drawbacks: it only references one screen object at a time, and reading the pixels back in C# feels like it's taking work off of the GPU and putting it onto the CPU, which I think has performance implications (if any experts want to speak to that, I'm all ears).
Is there a way to create a shared texture that all of the refraction shaders will be able to access? I suppose I could make that "tex" variable accessible, but is there a way to pull it in from a surface shader? And is it very expensive to write per frame like this? I'd imagine it's cheaper than using RenderToTexture() - is that correct? I'd prefer to avoid that, as I get the impression it's pretty heavy. Is this method faster/lighter?
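For reference, a minimal sketch of what I mean by "shared": Unity's Shader.SetGlobalTexture can expose the copied texture to every shader that declares a sampler with the matching name. The property name _ScreenCopyTex here is just a placeholder I made up, not anything built in:

```csharp
using UnityEngine;

// Sketch only: pushes the CPU-side screen copy to a global shader
// property each frame. Any shader declaring sampler2D _ScreenCopyTex
// (a made-up name) can then sample it - not just one Renderer.
public class ShareScreenCopy : MonoBehaviour {
    Texture2D tex;

    void Start () {
        tex = new Texture2D(Screen.width, Screen.height);
    }

    void OnPostRender () {
        // Still a GPU -> CPU readback, so the cost concern remains
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply(false);
        // Visible globally to all materials/shaders
        Shader.SetGlobalTexture("_ScreenCopyTex", tex);
    }
}
```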
Alternately, is there another, "shader native" way to access a camera "backbuffer" like this that might be more efficient/live on the gpu?
Any input would be greatly appreciated :)
Thanks Everyone!
Answer by Dave-Carlile · Aug 05, 2013 at 05:47 PM
You're likely not going to have much luck with ReadPixels. Copying data back from the GPU isn't going to perform well, especially a full screen of it.
Unless I'm missing something, Render Textures are the way to go. Render the scene to the render texture, then use it like you would a normal texture. The data remains in GPU RAM so there's never an issue with moving it around.
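A minimal sketch of that setup (assuming a second camera dedicated to rendering the refraction source; the names refractionCamera and _RefractionTex are placeholders, not built-in Unity names):

```csharp
using UnityEngine;

// Sketch: render a camera into a RenderTexture and share it with all
// shaders. Everything stays in GPU memory; no ReadPixels round trip.
public class RefractionSource : MonoBehaviour {
    public Camera refractionCamera; // placeholder: camera rendering the refracted scene
    RenderTexture rt;

    void Start () {
        // width, height, depth-buffer bits
        rt = new RenderTexture(Screen.width, Screen.height, 16);
        refractionCamera.targetTexture = rt;
        // Expose it globally so every refraction shader can declare
        // sampler2D _RefractionTex and sample it (name is a placeholder)
        Shader.SetGlobalTexture("_RefractionTex", rt);
    }
}
```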
Thanks Dave :) I was hoping to avoid using Pro features, but that does sound exactly like what I need. Stays on the GPU and everything. I suppose there's no clean alternative in Standard / self-code solution?
I'm not sure if there's a way to roll your own render textures or not. It seems like it might be possible, but I'm not knowledgeable enough about Unity to know if it is.
Sounds about right. Thanks again. Now have more tools for research :)
If my answer was helpful, please click the "accepted" checkmark.
As an additional note, you don't need to render everything to the render texture, just what you need to blend.
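One way to do that (a sketch; the layer name "Water" is a made-up example) is to restrict the render-texture camera to only the layers you need via its culling mask:

```csharp
using UnityEngine;

// Sketch: limit what the render-texture camera draws so you only pay
// for the geometry you actually need to blend.
public class RefractionCulling : MonoBehaviour {
    public Camera refractionCamera; // the camera rendering into the render texture

    void Start () {
        // Only render objects on the "Water" layer (layer name is an
        // example; use whichever layer holds your refractive geometry)
        refractionCamera.cullingMask = LayerMask.GetMask("Water");
    }
}
```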