Multiple Textures and their impact on resources
I'm at a bit of a crossroads here. Let me explain. We are making a virtual trainer for electrical connectors in a helicopter. There are hundreds of components that use hundreds of different types of connectors (different pin counts, layouts, etc.). There are two aspects to this trainer: the actual 3D model, where the student removes the wire connector from the component plug, and a 2D window overlay view where they actually test/probe the connectors.
Because there could be over a hundred pins on a single connector, we use a higher res image in the 2d window to show the connector face and allow the student to probe the pins using a multimeter. For the 2d window, I am using the original 256 map which has a single connector face on it.
In the 3d world we are using a much lower res version of the connector. Basically, for the 3d world I have a 1024 map with dozens and dozens of connector faces. The actual face can be clearly seen, but is not clear enough to be blown up for probing.
So my question is, is it better to continue the flow I am using or would it be better to just use the higher res single connector face textures throughout the 3d world and get rid of the 1024 multi-connector texture?
I figured that since in the 2d window you can only have 2 connectors showing at a time and in the 3d world you could have dozens of connectors showing at any given time, my current workflow would be better for overall game resources. The programmer thought that maybe since the higher res images were in the world anyway, even though they are culled until he calls them up in the window view, the impact on performance could be the same. My thought was that if the student has disconnected dozens of connectors, the game would then need to call dozens of individual maps to render all those different connectors vs just calling 1 or 2 maps to render the same scene.
We haven't had a chance to test it, but before we go and redo hours worth of work for the sake of a test, we thought we could pose the question to the community and see what the consensus is.
Answer by Owen-Reynolds · Mar 24, 2012 at 04:43 PM
All textures in the scene are loaded in general memory. BUT, each texture currently being displayed is also loaded into texture memory (of the graphics card). This is generally the bottleneck. In Unity: Play, select "Stats" and look at UsedTextures. You'll see that it goes up and down wildly as you move around.
If you look at some tiny 4x4 pixel objects using a 4meg texture, you'll see usedTextures jump by 4 megs. When you look away, you'll see it drop back down.
So, what you are doing now is mostly correct. A used texture is something that wasn't viewPort or backface culled -- the high-res images hanging around off-camera aren't causing a problem. You've basically invented LOD (Level of Detail) which is exactly that trick -- swap in your own lower-rez texture when you know it will look fine.
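The LOD trick described above can be sketched in a few lines. This is a minimal, hypothetical sketch (the texture names, the coverage threshold, and the `probing` flag are all made up for illustration), showing the decision the trainer is effectively making: bind the shared low-res atlas in the 3D world, and the single-face high-res map only when the connector is blown up for probing.

```python
# Hypothetical LOD-style texture pick: shared low-res atlas for the
# 3D world, single-face high-res map only for the 2D probing view.
def pick_texture(screen_coverage: float, probing: bool) -> str:
    """Return which texture to bind, given how much of the screen the
    connector covers (0.0-1.0) and whether the 2D probing view is open."""
    if probing or screen_coverage > 0.5:
        return "connector_face_256"   # high-res single-face map
    return "connector_atlas_1024"     # shared multi-face atlas

print(pick_texture(0.1, probing=False))  # distant connector: atlas is plenty
print(pick_texture(0.1, probing=True))   # probing always gets the high-res face
```

The 0.5 coverage threshold is arbitrary here; in practice you would tune it to the point where the atlas face stops reading clearly.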
What happens is standard operating system stuff. The graphics card would love to have all textures (and models) for the frame loaded into its own memory. It can't actually read a texture from main memory. Instead, it "bumps" a current texture and loads the new one. With luck (or planning), it bumped a texture it had already used this frame, but maybe not. So, it has to reload it and bump something else. Worst case, you manage to accidentally always "bump" the next texture you were going to use (known as thrashing).
So, as "texture amount used in one frame" goes up to the size of graphics card memory, no frame rate change. As soon as you go above that, you could get a sudden, large frame-rate drop. Then it just gets gradually worse as you add more.
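You can see that cliff in a toy simulation. This is a sketch, not how any real driver manages texture memory, but it models the "bumping" above as a small LRU cache: as long as the frame's textures fit, only the first frame pays for loads; go one texture over capacity and every single access becomes a reload.

```python
from collections import OrderedDict

def total_reloads(capacity: int, frame: list[str], n_frames: int = 10) -> int:
    """Simulate a tiny texture cache with LRU eviction ("bumping").
    Returns total texture reloads (misses) over n_frames identical frames."""
    cache = OrderedDict()
    misses = 0
    for _ in range(n_frames):
        for tex in frame:
            if tex in cache:
                cache.move_to_end(tex)        # reused this frame, keep it hot
            else:
                misses += 1                   # reload, bumping the oldest texture
                if len(cache) >= capacity:
                    cache.popitem(last=False)
                cache[tex] = True
    return misses

frame = ["A", "B", "C", "D"]
print(total_reloads(capacity=4, frame=frame))  # fits: 4 cold loads, then all hits -> 4
print(total_reloads(capacity=3, frame=frame))  # one texture over: every access misses -> 40
```

With capacity 3 and a repeating A-B-C-D frame, LRU always evicts exactly the texture needed next, which is the worst-case thrashing pattern described above.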
That makes sense. So they are already loaded in memory, but not texture memory. I turned on stats and brought the 2d windows/menus up and watched the texture memory jump a little, then drop after I closed the window.
So I guess the best idea would be to keep doing it the way I am doing it now.
I was noticing that even if I turn away from all of the objects in the scene, so I am looking at empty space, the texture count did not drop. Is that because those are all stored in cache? It did go up and down as I pan the camera around and new objects come into view. However, it never dropped below 25 when staring off into emptiness.
Old textures mostly stay in the cache until someone bumps them, but I think texturesUsed only counts the ones Unity thinks you are using.
For me, I get three when I face away -- the terrain has 3 "paint" textures (it even loads the ones I can't see right now.) When I jump off the edge, I go down to 2(?) When I disable the script with OnGUI (which wasn't printing anything) it drops to one 64 byte texture.
So, Unity probably thinks there really are 25 "hot" textures (skybox? fonts? render texture? Minimap?)
Ahhhh, yes, I completely forgot about the Minimap and a few of the on-screen icons. Once I started turning those off, the texture count started to drop. Those weren't registering in my brain as part of the 3d world, so I was just ignoring them. Duh.
Thanks for the help.
Answer by rutter · Mar 23, 2012 at 05:55 PM
It should be pretty obvious that higher resolution textures will have a larger memory footprint. Bear in mind that this relationship is quadratic, not linear (doubling a square texture's resolution will quadruple its size in memory).
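The quadrupling is easy to verify with back-of-envelope numbers. This sketch assumes uncompressed 32-bit RGBA (4 bytes per pixel); real GPU formats like DXT compress this considerably, so treat these as upper bounds rather than what Unity actually allocates.

```python
def rgba_bytes(side: int, mipmaps: bool = False) -> int:
    """Uncompressed 32-bit RGBA footprint of a square texture, in bytes.
    A full mip chain adds roughly 1/3 on top of the base level."""
    base = side * side * 4
    return base * 4 // 3 if mipmaps else base

for side in (256, 512, 1024):
    print(f"{side}x{side}: {rgba_bytes(side) // 1024} KiB")
# Each doubling of the side length quadruples the footprint:
# 256 -> 256 KiB, 512 -> 1024 KiB, 1024 -> 4096 KiB
```

So one 1024 atlas costs as much as sixteen 256 single-face maps; the question is how many of each are actually on screen per frame.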
As far as I understand it, larger textures introduce two main concerns:
They take longer to load
They require more memory
Presumably all of those textures will be loaded into memory at some point, regardless of whether or not they're currently displayed. In that sense, using fewer textures may actually reduce your memory footprint.
If you're rendering on a GPU with lots of memory to cache your textures, I wouldn't anticipate any major framerate concern with texture size until you hit a point where the cache runs out of space. That would be a Bad Thing. Swapping cache is extremely expensive in general, will probably scale at least linearly per swap, and will swap more often as your memory starvation problems get worse.
If you're rendering on integrated graphics, the simplest explanation is that you'll have about the same problem but a much smaller cache. Adding insult to injury, you are probably sharing both memory and memory bus directly with the CPU, which makes all of the above problems even worse.
Probably someone who specializes in these things could give you a much better explanation, but that's what I've got for now. ;)
So then, it doesn't matter whether they are being displayed or not? They are taking up memory once they are loaded, so having the larger texture with multiple connector faces for "in world" is just taking up more memory.
The game performance shouldn't take a hit if it is displaying 50 textures or 2 textures because they are already loaded into memory either way?
I was under the impression that displaying fewer textures was better.