Replicate shader-side tex2D() with CPU-side code
Hi, I'm making a shader that uses a texture to displace the vertices of a plane along the y-axis. It is crucial that I can replicate the exact same value in my C# code as in my vertex shader (to prevent gaps between LOD levels).
I.e., I need a tex2D(_MainTex, v.texcoord.xy)-type function on the CPU side.
I am using the exact same UV coordinate in both the shader and the C# code (fetching a copy of it from the mesh itself), but neither Texture2D.GetPixel(int x, int y) nor Texture2D.GetPixelBilinear(uv.x, uv.y) returns the exact same value as its Cg counterpart tex2D(sampler2D, float2).
It seems tex2D is interpolating between nearby pixels. When the texture is a single color, or over large areas of the same color, the result is correct; however, at the edges where the color of the texture changes, the values no longer match.
Making the shader show the texture on the two objects proves that the UV coordinates match up correctly as well; it is just the mysterious interpolation that is mystifying me.
Can anyone explain to me exactly how tex2D reads from the UV coordinate? How does its interpolation algorithm work?
Also, I'm stuck with shader model 3.0, so no Cg tex2Dfetch, which (should) use exact pixel coordinates and return the exact pixel value.
Thanks for any advice
Edit: Solved (sorta)
Ended up just doing the interpolation directly in the shader, thus sidestepping the problem of replicating tex2D.
This effect is probably due to filtering and mipmapping. You won't be able to easily replicate what the GPU is doing, especially because the implementation is entirely up to the vendor. So even if you do eventually get the same results on your computer, it probably won't work on another computer.
Why do you need to replicate this behaviour?
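That said, the filtering itself is well understood: with mipmaps disabled, bilinear filtering blends the four texels nearest the sample point, with the sample point shifted by half a texel so that texel centers return exact texel values. A minimal language-agnostic sketch (here in Python; single channel and clamp addressing are assumptions, and real GPUs may round the fractional weights at limited precision):

```python
import math

def bilinear_sample(texels, u, v):
    """GPU-style bilinear sample of a 2D grid of floats.

    texels: list of rows, texels[y][x], single channel.
    u, v:   UV coordinates in [0, 1]; clamp addressing assumed.
    """
    h = len(texels)
    w = len(texels[0])
    # GPUs sample at texel centers: uv * size - 0.5 lands exactly on a
    # texel when uv points at its center, and between texels otherwise.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    clamp = lambda i, hi: max(0, min(i, hi))
    x0c, x1c = clamp(x0, w - 1), clamp(x0 + 1, w - 1)
    y0c, y1c = clamp(y0, h - 1), clamp(y0 + 1, h - 1)
    # Lerp horizontally on both rows, then vertically between them.
    top = texels[y0c][x0c] * (1 - fx) + texels[y0c][x1c] * fx
    bot = texels[y1c][x0c] * (1 - fx) + texels[y1c][x1c] * fx
    return top * (1 - fy) + bot * fy
```

Note the half-texel offset: sampling at a texel center (e.g. uv = (0.25, 0.25) on a 2x2 texture) returns that texel exactly, which is why flat-colored regions match GetPixel while edges, where the weights blend two different colors, do not.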
I have two planes with different vertex counts, both using the same map-height function (GPU side, in the shader), and an object floating on the surface created by the same function, but CPU side. I need to both keep the object perfectly on the surface of the planes and, more importantly, interpolate the two planes' edges to fit each other seamlessly.
This is easier to do on the CPU side, as I don't want 1000 draw calls for 1000 objects just to tell each one whether it has to match up to the LOD1 or LOD2 vertex height.
It would be wonderful if the shader could just return the actual height it calculates, but that doesn't seem possible without compute shaders, which are beyond my target hardware.
I don't use mipmaps, and there is no apparent filtering.
Hi, I think I did the same kind of thing you want to do. I had to make sure that the texture is uncompressed (ARGB32) and not resized by Unity (set the "Max Size" parameter high enough for your platform). Using point sampling could help, but you may need bilinear to make smooth transitions between different texels/heights. To make sure I got the same result, I coded my own bilinear both in shaders and in C#, using tex2Dlod with point sampling + clamp mode.
Also, if your platform has enough memory, I recommend storing the heights in a table (of floats/Color32/ints...) instead of calling Texture2D.GetPixel(), which can be slow (C# to C++ API calls are slow). It may also be linked to the encoding of the heights in the texture; this can be tricky...
Definitely going to try that, but I'm not quite sure how to code my own bilinear function on the shader side. Could you provide any pointers on how to get started? Also, what do you mean by point sampling? Thanks.
Edit: the height of the texture is simply a sin function of the R channel.
It seems that using either of the following returns the same value:
tex2D (_GlobalMap, v.texcoord.xy)
tex2Dlod (_GlobalMap, float4(v.texcoord.xy, 0, 0))
Using clamp wrap mode, point filter mode, and an uncompressed RGBA 32-bit, 1024x1024 texture.
Oh, by "point sampling" I mean the "point" filter mode. There is a tutorial + implementation for bilinear here: http://www.catalinzima.com/xna/tutorials/4-uses-of-vtf/terrain-rendering-using-heightmaps/
But if you only use the red channel, it should be OK to use the shader's regular bilinear filtering. It's important not to use it if you encode float values across multiple texture channels, as explained here for example: http://scrawkblog.com/2013/06/27/encodedecode-floating-point-textures-in-unity/
I have no more ideas about what could be wrong; maybe post some parts of your code?
Edit: the second link is not interesting, except for the comment. This one is better: http://www.gamedev.net/topic/637446-packing-float-into-rgba-texture/. There are also functions to encode/decode in UnityCG.cginc.
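For illustration, the packing trick those links describe stores a float in [0, 1) across four 8-bit channels, each holding successively finer fractional bits. A Python sketch of the scheme (the same idea as the EncodeFloatRGBA/DecodeFloatRGBA helpers in UnityCG.cginc; exact precision and rounding on a real GPU will differ):

```python
def encode_float_rgba(v):
    """Pack a float in [0, 1) into four [0, 1] channel values."""
    mul = (1.0, 255.0, 65025.0, 16581375.0)  # 255**0 .. 255**3
    enc = [(v * m) % 1.0 for m in mul]
    # Subtract the part already carried by the next-finer channel,
    # so each channel only keeps its own 8 bits' worth.
    nxt = (enc[1], enc[2], enc[3], enc[3])
    return [e - n / 255.0 for e, n in zip(enc, nxt)]

def decode_float_rgba(enc):
    """Recover the float from the four channel values (a dot product)."""
    weights = (1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0)
    return sum(e * w for e, w in zip(enc, weights))
```

The decode telescopes back to the original value to within roughly 1/255^4, which is why hardware bilinear must not be used on such a texture: blending the channels independently mixes bits of unrelated significance, so a manual bilinear over point-sampled, decoded values is required instead.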