Why are the pixels in a shader only confined to the mesh?
Hello, I am still quite new to shaders, but I've been learning Shader Graph and have been wondering why the pixels of a shader have to stay within the bounds of the mesh when rendered. The reason I wonder this is that I've been working on an aurora borealis shader where I create streaks that run horizontally (I have this part done), but I would like to "smear" those streaks upwards vertically like a real aurora. The only method I've found is to make a bunch of copies of the streaks that are displaced upwards, but this doesn't seem very efficient, and they also get cut off at the edge of my plane. There is also just copying and pasting the plane with the shader a bunch of times and offsetting the copies vertically, but this also seems inefficient... Could I not just somehow tell the pixels being rendered to also apply their color to the pixels above them?
Answer by Pangamini · Mar 04, 2020 at 07:03 PM
Pixels are rendered on the geometry. The pixel shader receives interpolated data from the vertices (as each triangle is rasterized into pixels). No, you can't render to a portion of the screen that's not covered by the geometry. BUT: you can extend the geometry itself, either with your script / modelling tool (CPU-based), or with the vertex shader (GPU), which is responsible for calculating each vertex's position on the screen. Then you are able to draw to all areas covered by the new geometry.
Another approach is to use something like a post-processing effect, which is basically rendering a fullscreen quad, which means your shader gets to write everywhere on the screen. For example, you could draw only the line that forms the base of the aurora, and then 'smear' it, the way a blur effect would, but only in one direction. Unity has a built-in method for such commands: Graphics.Blit(). It receives a source and a destination texture, together with a material to be used.
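As a minimal sketch (assuming the built-in render pipeline), this is roughly what a Graphics.Blit() based full-screen pass looks like from a camera script. The "smearMaterial" here is an assumed material holding whatever smear shader you write; it is not something Unity provides.

```csharp
using UnityEngine;

// Minimal sketch: a post-processing pass that runs a material over the whole screen.
// "smearMaterial" is an assumed material whose shader samples pixels below the
// current one and accumulates them (the directional "smear").
[RequireComponent(typeof(Camera))]
public class AuroraSmearEffect : MonoBehaviour
{
    public Material smearMaterial;

    // Called by Unity after this camera finishes rendering (built-in render pipeline).
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (smearMaterial != null)
            Graphics.Blit(source, destination, smearMaterial); // full-screen pass with your shader
        else
            Graphics.Blit(source, destination); // pass-through if no material is assigned
    }
}
```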
So my quick suggestion would be (see the code sketch below the list):
Render the aurora base (some line) into a custom RenderTexture that matches the camera's renderTarget (usually the screen buffer)
Apply the smear effect, perhaps with a couple of iterations
Blend the final aurora screen-space texture with the actual screen, perhaps at the same time as the skybox is being rendered (right after it)
Or simply create geometry with the 'smeared' texture on it; that would be the cheapest solution
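A rough sketch of the first three steps, again assuming the built-in pipeline. The materials ("smearMaterial" for one upward smear step, "blendMaterial" for compositing) and the shader property name "_AuroraTex" are assumptions for illustration, not part of the original answer:

```csharp
using UnityEngine;

// Rough sketch of the suggested pipeline: start from a texture containing the
// aurora base, smear it upwards over a few iterations, then blend the result
// over the camera image.
[RequireComponent(typeof(Camera))]
public class AuroraPipeline : MonoBehaviour
{
    public Material smearMaterial;   // assumed: performs one directional smear step
    public Material blendMaterial;   // assumed: composites the aurora over the screen
    public RenderTexture auroraBase; // the rendered aurora "base line"
    public int iterations = 4;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Work in temporary buffers that match the screen size.
        RenderTexture a = RenderTexture.GetTemporary(source.width, source.height);
        RenderTexture b = RenderTexture.GetTemporary(source.width, source.height);

        Graphics.Blit(auroraBase, a); // start from the aurora base

        // Apply the smear a few times; each pass stretches the streaks further upward.
        for (int i = 0; i < iterations; i++)
        {
            Graphics.Blit(a, b, smearMaterial);
            RenderTexture tmp = a; a = b; b = tmp; // ping-pong the buffers
        }

        // Blend the smeared aurora over the camera image.
        blendMaterial.SetTexture("_AuroraTex", a); // "_AuroraTex" is an assumed shader property
        Graphics.Blit(source, destination, blendMaterial);

        RenderTexture.ReleaseTemporary(a);
        RenderTexture.ReleaseTemporary(b);
    }
}
```

The ping-pong between the two temporary textures is just one way to iterate a blit; each extra iteration costs another full-screen pass, which is why the last option in the list (baking the smear into a texture on simple geometry) is the cheapest.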