Does filling a RenderTexture using a pixel shader require a camera?
I'd like to fill a 3D RenderTexture using a pixel shader (I can currently do this via a compute shader) to represent voxel data. These 3D RenderTextures will subsequently be sampled to draw stuff on the screen.
I know that a Texture3D (note: not a RenderTexture) can be written to on the CPU via the SetPixels API. I'm not interested in that.
TL;DR: I don't know the right way to go about this in Unity. I just need the pixel shader to be invoked W×H times. Each of those invocations will write to all the depth slices (essentially filling a voxel column). Nothing will be drawn to the screen.
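For reference, the compute-shader version of the same fill (the approach I already have working) looks roughly like this sketch; the file name, kernel name, and the hard-coded 32³ size are placeholders:

```hlsl
// FillVolume.compute -- hypothetical name
#pragma kernel FillColumns

RWTexture3D<float> volumeTex;

[numthreads(8, 8, 1)]
void FillColumns(uint3 id : SV_DispatchThreadID)
{
    // One thread per (x, y) column; each thread writes all 32 depth slices
    for (uint slice = 0; slice < 32; slice++)
        volumeTex[uint3(id.xy, slice)] = 1.0f;
}
```

Dispatched from C# with something like `fillCS.SetTexture(0, "volumeTex", rt); fillCS.Dispatch(0, 32 / 8, 32 / 8, 1);`. The question is how to get the equivalent W×H invocations out of a pixel shader.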
Extra info:
The RenderTexture uses enableRandomWrite, and the pixel shader references it via RWTexture3D.
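For context, the volume textures are created roughly like the sketch below (in Unity 4.x-era APIs the volume flag is `isVolume`; newer Unity versions use `dimension = TextureDimension.Tex3D` instead):

```csharp
// Sketch: creating a 32x32x32 RenderTexture with UAV access enabled
RenderTexture rt = new RenderTexture(32, 32, 0, RenderTextureFormat.RFloat);
rt.isVolume = true;          // Unity 4.x API for 3D render textures
rt.volumeDepth = 32;
rt.enableRandomWrite = true; // required for RWTexture3D writes in the shader
rt.Create();
```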
I'm currently creating a quad mesh and using the Graphics and GL classes to try to accomplish this. My FillMetavoxel function is as follows:
void FillMetavoxel(int xx, int yy, int zz)
{
    mvFillTextures[zz, yy, xx].Create();

    // Bind the 3D RT as a UAV so the pixel shader can write to it
    Graphics.SetRandomWriteTarget(0, mvFillTextures[zz, yy, xx]);
    //Graphics.SetRenderTarget(mvFillTextures[zz, yy, xx]);

    matFillVolume.SetPass(0); // activate first pass in shader associated with this material

    Vector3[] vertices = new Vector3[] { new Vector3(0, 0, 0),
                                         new Vector3(1, 0, 0),
                                         new Vector3(1, 1, 0),
                                         new Vector3(0, 1, 0) };
    int[] triangleIndices = new int[] { 0, 1, 2, 0, 2, 3 };

    Mesh quad = new Mesh();
    quad.vertices = vertices;
    quad.triangles = triangleIndices;

    GL.PushMatrix();
    // I'm doing something stupid here. I know it.
    GL.LoadPixelMatrix(0, 32, 0, 32);
    Graphics.DrawMeshNow(quad, Vector3.zero, Quaternion.identity);
    Debug.Log("Done filling " + xx + ", " + yy + ", " + zz);
    testCube.renderer.material.SetTexture("_Volume", mvFillTextures[zz, yy, xx]);
    GL.PopMatrix();
}
The fill shader (used by matFillVolume) is:
Shader "Custom/Fill Volume" {
    Properties {
    }
    SubShader {
        Pass {
            Cull Off ZWrite Off ZTest Always

            CGPROGRAM
            #pragma target 5.0
            #pragma vertex vert
            #pragma fragment frag
            #pragma exclude_renderers flash gles opengl
            #pragma enable_d3d11_debug_symbols
            #include "UnityCG.cginc"

            RWTexture3D<float> volumeTex;

            struct v2f {
                float4 pos : SV_POSITION;
            };

            // We want the quad rasterized s.t. each fragment is a voxel column
            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = float4(v.vertex.xy, 0.0f, 1.0f); // already in clip space
                return o;
            }

            // fill a depth column of the 3D RT
            half4 frag(v2f i) : COLOR
            {
                uint2 index = i.pos.xy * 32;
                int slice;
                for (slice = 0; slice < 32; slice++) {
                    // scalar write, to match the RWTexture3D<float> declaration
                    volumeTex[uint3(index, slice)] = 1.0f;
                }
                discard; // this shader draws nothing to the screen
                return half4(0.1f, 0.3f, 0.7f, 0.8f);
            }
            ENDCG
        }
    }
    FallBack Off
}
What's the best way to ensure that a 32×32 pixel square is rasterized completely, so that the pixel shader is invoked 1,024 times? I'm guessing it has something to do with using an additional camera, but my pixel shader doesn't output a color (it just does UAV writes).
Answer by raja-bala · Sep 12, 2014 at 06:42 PM
I got the 2D and 3D UAV writes to work with a secondary camera in the scene. I attached a render target whose size reflects the 'compute work' I wanted to do, and used Graphics.Blit with a material that fills the UAVs.
I plan to use this setup for filling voxels in my scene, so this has to happen several (100?) times every frame. To ensure that no time is wasted on the render target attached to the secondary camera, I set its clear flags to None and its culling mask to 0 (guessing this means 'Nothing').
I verified using Intel GPA that the process is optimal - no wasted effort on the secondary camera.
Those interested can find the 2D & 3D UAV writes using PS & CS scene here.
I'd still like to know if there are better ways to do this (without a secondary camera and an RT attached to it). I couldn't get it to work using Graphics.DrawMesh(Now).
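The per-frame fill step looks roughly like the sketch below. Names (`volumeRT`, `fillRT`, `matFillVolume`) are placeholders, and the UAV slot convention may vary by Unity version; the point is that the Blit target's size determines how many pixel-shader invocations you get, while the actual output goes to the UAV:

```csharp
// Sketch: one fill pass driven by Graphics.Blit
// fillRT: a dummy 32x32 2D RenderTexture (the secondary camera's target)
// volumeRT: the 3D RenderTexture created with enableRandomWrite = true
void FillVolume()
{
    // Bind the 3D RT as a UAV; on D3D11 slot 0 is typically the color target,
    // so the UAV goes in a higher slot matching the shader's register
    Graphics.SetRandomWriteTarget(1, volumeRT);

    // Blitting to a 32x32 target invokes the fill pixel shader once per pixel;
    // each invocation writes a whole 32-deep voxel column through RWTexture3D
    Graphics.Blit(null, fillRT, matFillVolume);

    Graphics.ClearRandomWriteTargets();
}
```

On the shader side the UAV declaration would then specify the matching register, e.g. `RWTexture3D<float> volumeTex : register(u1);`.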
Hi! The link is broken; do you still have that demo somewhere? It would be very useful for me! I'm trying to write to a 3D texture from a fragment shader without any success...