Is it possible to use a texture atlas for main texture but share a single normal map?
I have a texture atlas that holds the different face values for a coin, but only a single normal map for the face ridges.
Whenever I shift my UVs at runtime to show the coin values on a shared material across over 100 coin objects, it looks great with only 1 draw call -- but unfortunately, it looks like I've also shifted the UVs for the normal map, because only the top-left mapping shows the ridges and all the others appear to be missing the normal map texture.
I'd hate to waste memory by tiling the same normal map image over and over to create a large normal map (256 x 256 vs 1024 x 1024). (I also have other atlas files for other game objects that share normal maps, so the wasted texture memory grows.)
Is there a way I can employ a main texture atlas without having to tile the normal map?
Thanks, Manny
For TP pro I had to have one normal map per texture, otherwise I could get alignment problems. Have you thought about displaying the coin value on a quad placed slightly in front of the coin's face, and swapping this image rather than the whole coin?
Answer by Bunny83 · Oct 27, 2014 at 01:03 PM
Well, usually a shader uses the same set of UV coordinates for the main texture and the normal map. However, you can write or modify a shader to use a different UV channel for the normal map. That way you can change the UVs for the main texture while keeping the UVs for the normal map.
Another way would be to keep only one set of UV coordinates but pass shader parameters that describe the atlas tiling on the u and v axes. Knowing that, you can multiply the UVs by the tiling count and use only the fractional part as the UV for your normal map.
Like MrSoad said, you can get alignment problems, since the texture and the normal map use different resolutions and the UVs have different precision.
It's usually better to just use the same layout for both the main texture and the normal map.
ps: Didn't you want to use a submesh for the coin's face and the coin's border in your other question? Of course, using two separate materials will cause 2 draw calls.
edit
Here it is: an example of the second solution.
The mesh consists of 4 quads combined into one mesh, with only one set of UVs. Since we use normal mapping, the mesh needs vertex normals and tangents. Note: all textures and normal maps were simply taken from a Google image search ;) The atlas texture is one of the old terrain textures of Minecraft. It's an atlas that contains a uniform grid of tiles. In the material inspector I've set the x and y values of "Atlas Tiling" to 16, because the atlas has 16x16 tiles (256 tiles / textures).
The shader is, like I said, just the normal map example shader with a few modifications:
Shader "Custom/NormalMappedAtlas" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
        _BumpMap ("Bumpmap", 2D) = "bump" {}
        _AtlasTiling ("Atlas Tiling", Vector) = (1,1,0,0)
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        #pragma surface surf Lambert
        struct Input {
            float2 uv_MainTex;
        };
        sampler2D _MainTex;
        sampler2D _BumpMap;
        float4 _AtlasTiling;
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
            // Scale the atlas UV up by the tile count and keep only the
            // fractional part so the normal map repeats once per tile.
            float2 bumpUV = frac(IN.uv_MainTex * _AtlasTiling.xy);
            o.Normal = UnpackNormal (tex2D (_BumpMap, bumpUV));
        }
        ENDCG
    }
    Fallback "Diffuse"
}
The actual changes are:
- I added a new property called "Atlas Tiling" (the shader variable name is _AtlasTiling)
- Inside the CG code I declared that variable as a vector (float4)
- I added one line of code to calculate the UV for the normal map
This line:
float2 bumpUV = frac(IN.uv_MainTex * _AtlasTiling.xy);
does all the magic. Here's an example to understand what actually happens. As we know, the incoming UV coordinates are between 0 and 1. Since our atlas is uniformly tiled, each tile only uses a rectangular fraction of that range. The rectangle of the first tile (near UV 0/0) spans the UV coordinates (0,0) to (1/16, 1/16), since there are 16 tiles in each direction. So when we multiply the incoming values (which are between 0 and 1/16) by the tile count (16), we get values between 0 and 1, which is exactly what we want for the normal map.
The second tile, however, has the UV coords (1/16, 0) to (2/16, 1/16). The resulting UV in the x/u direction would be between 1 and 2. If the normal map's wrap mode is set to Repeat, this would already work. However, to bring the coordinates back into the 0-to-1 range, we use only the fractional part via the "frac" function. That's all.
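The arithmetic above can be checked outside the shader. Here is a minimal Python sketch of the same remap, assuming a uniform 16x16 atlas like the example material's "Atlas Tiling" of 16 (the function name is mine, purely for illustration):

```python
# Mirror of the shader line: frac(IN.uv_MainTex * _AtlasTiling.xy)
def atlas_to_bump_uv(u, v, tiles_x=16, tiles_y=16):
    """Multiply the atlas UV by the tile count and keep only the
    fractional part, giving a per-tile UV in [0, 1)."""
    return (u * tiles_x) % 1.0, (v * tiles_y) % 1.0

# First tile: atlas UVs run from (0,0) to (1/16, 1/16).
# Its centre (1/32, 1/32) maps to the centre of the normal map.
print(atlas_to_bump_uv(1/32, 1/32))   # (0.5, 0.5)

# Second tile along u: atlas UVs run from (1/16, 0) to (2/16, 1/16).
# u * 16 lies between 1 and 2, so the fractional part wraps it back.
print(atlas_to_bump_uv(3/64, 1/32))   # (0.75, 0.5)
```

Every tile in the grid lands on the same full 0-to-1 range, which is why one small normal map serves the whole atlas.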
As I said earlier, this method has some problems / restrictions. The atlas needs to be uniformly tiled. Here are some examples of atlas textures where this technique wouldn't work: different tile sizes1, different tile sizes2. Another problem is that you can't use 1 or 2 pixels of padding between the textures, since the tiles need to sit edge to edge. That causes problems when mipmapping is used, and you might get color bleeding when you use bilinear or trilinear filtering. One solution would be to use only every second tile, so one tile acts as padding. That is of course quite inefficient, since you would waste about half of the texture space.
If color bleeding occurs or mipmapping gives you problems, the first solution would be better. There you can pack your atlas as you wish, since you pass the normal map UVs separately. But keep in mind that the second UV channel can't be used for lightmapping or other things in this case. It's also possible to put your second UV set into the color channel if that isn't required.
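To illustrate the color-channel trick just mentioned: vertex colors are commonly stored as 8 bits per channel, so packing a UV pair into, say, the R and G channels quantizes it to steps of 1/255. This is a hypothetical sketch (the function names are mine) showing the round trip and the precision you give up:

```python
# Pack a UV pair (each component in [0, 1]) into two 8-bit color channels.
def uv_to_color(u, v):
    return round(u * 255), round(v * 255)

# Unpack on the other side: divide the channels back into [0, 1].
def color_to_uv(r, g):
    return r / 255.0, g / 255.0

r, g = uv_to_color(0.5, 0.25)
u, v = color_to_uv(r, g)
# u and v are recovered only to within ~1/255 of the originals.
```

That quantization is another form of the alignment / precision caveat mentioned earlier, so it suits coarse layouts better than pixel-exact ones.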
Thanks, and you got me on that other question as well. My ultimate quest is to find a way to do both:
1. Avoid material cloning at all costs to preserve memory -- hence my aversion to changing texture scaling and offset at runtime.
2. Minimize draw calls at all costs.
I realize that there will come a time when there needs to be a balance and trade off between the two but want to make sure I've researched all possibilities before I decide.
I am interested in the UV channel method you mentioned. Do you know of a tutorial or example I can look at? Would this require me to create my object with two UV maps? I started down this path, and in the object inspector I see that my mesh has "uv, uv2".
I am aware of potential alignment issues but would like to see for myself if it would be acceptable.
Besides, it's all learning, which is cool.
Thanks, Manny
Hmm, your points 1 and 2 are actually the same ;) Using only one material will result in only one draw call. Having multiple materials that use the same texture, for example, doesn't increase memory; it just creates a separate draw call, since the material settings are effectively the parameters of the draw call. Having different parameters requires you to invoke them as separate draw calls.
Depending on the shader you're using and / or if you use lightmapping or not, the secondary UV channel might already be in use. In that case you can't use it for your normal map.
If your second channel is free, you can set up UV coordinates for your normal map in this channel. Just take a look at the normal mapping example; all you have to change is to rename uv_BumpMap to uv2_BumpMap, as you can read below the "Surface Shader input structure".
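A minimal sketch of that change, assuming a free second UV channel -- only the Input struct and the bump sample differ from the standard normal mapping example shader:

```
struct Input {
    float2 uv_MainTex;   // atlas UVs, shifted at runtime
    float2 uv2_BumpMap;  // static UVs from the mesh's second channel
};

void surf (Input IN, inout SurfaceOutput o) {
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
    // The normal map reads from uv2, so shifting the main UVs
    // leaves the ridges in place.
    o.Normal = UnpackNormal (tex2D (_BumpMap, IN.uv2_BumpMap));
}
```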
The second solution only works with an atlas that is split into a uniform grid. For example, it contains 16 images in a 4x4 or 8x2 grid where each tile has the same size. There you would add shader parameters to tell the shader the tile count in each direction -- in the 8x2 case, 8 and 2. To get the UV for the normal map you just multiply the incoming UV coordinates by your tiling count and take the fractional part, which will be between 0 and 1, as the UV for your normal map.
Bunny,
May I call you that? Thanks for your input.
My concern about memory stems from what I learned about how Unity treats a material at runtime when a property setter is used to change the offset, scale, color, etc. on a shader -- it clones that material for the current object. I believe that will increase memory usage.
Still, I am intrigued by the second solution you proposed ("... atlas that is equally split into a grid ..."); try as I might to visualize it, I am struggling. Is there a known example of that method in use that I can study, especially shader code?
Thanks, Manny
@manny003: Please don't reply with an "Answer". There's an "add new comment" button below each answer / question. Answers should answer the question at the top. This is not a forum but a Q / A site ;)
Yes, if you modify any parameter of a material, the material is cloned / duplicated. But a material needs almost no memory at all. Adding a second UV channel will probably be more data.
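A rough back-of-envelope calculation makes the comparison concrete (illustrative figures only, not measured Unity internals): a second UV channel adds one Vector2, i.e. two 4-byte floats, per vertex.

```python
# Extra mesh data for a second UV channel:
# one Vector2 (2 floats x 4 bytes) per vertex.
def uv2_bytes(vertex_count):
    return vertex_count * 2 * 4

# Even a fairly dense 10,000-vertex mesh only adds 80 KB:
print(uv2_bytes(10_000))  # 80000 bytes
```

That is still tiny next to a texture, but it is far more than the handful of property values a cloned material carries.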
If a material is duplicated, it will still use the same texture(s). Textures are never uploaded twice to the GPU's VRAM unless you explicitly create a duplicate. As I said, a material equals one draw call. If the material is duplicated (so you get a new material with the same or almost the same properties), it will result in a new draw call.
If you're on mobile you want to minimize the draw calls, but if that means you need unnecessarily large textures, you have to find a balance between draw calls and memory usage. Sometimes it might be better to have 2 draw calls instead of 1 if you save 50% of the memory used by the textures. This was just an abstract example ;)
I will modify one of my old samples and glue together an example to show how the second way works. I'll edit my answer above.
Bunny,
I just saw your example of "Atlas Tiling" (above). Thank you so much for taking the time to put it together. This is definitely a technique that I will be studying for the next days to come. You're the best. If only I can impose on you a little bit more: Could you make the Unity project file and assets available for download? I will be combing through it line by line to absorb all the knowledge. :)
Regarding your most recent comments above, ".. Adding a second uv channel will be probably more data." -- this would appear to be true. Zounds!
I just completed the refit of my mesh, shaders and texture, and although I eliminated the draw calls I expected to (-4), it increased the runtime memory consumption by > +25 MB.
This is amazing, as I figured the extra UV channel would just be another set of Vector2 arrays. Mind you, my meshes are simple geometric objects like cubes and pyramids and the like -- nothing complex.
One last thing, about the material cloning, you stated:
"Textures are never uploaded twice to the GPU's VRAM unless you explicitly create a duplicate."
I get it. But what if I instantiated the same material over and over for the purpose of creating a material pool with pre-computed offsets? Does that count as explicitly creating a duplicate?
Example:
var originalMaterial : Material = Resources.Load("Materials/MyMaterialAtlas", typeof(Material)) as Material;
for (var i : int = 0; i < poolMaterial.length; i++) {
    poolMaterial[i] = Instantiate(originalMaterial);
    poolMaterial[i].mainTextureOffset = new Vector2(nBlahX, nBlahY);
}
Thanks, Manny