Why is the function "Destroy" taking so much time?
I'm seeing a vast amount of time disappearing into the function "Destroy" when profiling my game on an iPad 1 (it's also a factor on an iPad 2, but orders of magnitude less painful). At the point I was recording, I was doing a lot of procedural mesh creation, destroying the previous frame's mesh each update. Most likely it's related to that?
Just wondering if anyone can shed any light on what exactly could be going on, i.e. what is being destroyed that takes so long (and why)? Most importantly, is there anything I can do to eliminate or lessen the expense?
Hopefully you can see a screen-grab from the profiler below:
Thanks!
Answer by Trepan · Aug 26, 2011 at 06:57 AM
OK, so I've reworked my procedural mesh code to reuse the same mesh rather than calling Mesh.Destroy and creating a new one each frame, and that has brought the "Destroy" time down considerably. The take-away lesson: destroying stuff is slow; avoid it.
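A minimal sketch of that reuse pattern against Unity's Mesh API (the BuildVertices/BuildTriangles generators are placeholders standing in for whatever per-frame geometry code you have):

```csharp
using UnityEngine;

// Reuse a single Mesh instance across frames instead of
// Destroy()ing it and allocating a new one each update.
public class ProceduralMeshReuse : MonoBehaviour
{
    Mesh mesh;

    void Start()
    {
        mesh = new Mesh();
        GetComponent<MeshFilter>().mesh = mesh;
    }

    void Update()
    {
        // Clear() first, so the new triangle array can't briefly
        // reference vertices that no longer exist when counts change.
        mesh.Clear();
        mesh.vertices = BuildVertices();   // hypothetical per-frame generators
        mesh.triangles = BuildTriangles();
        mesh.RecalculateNormals();
    }

    Vector3[] BuildVertices()  { return new Vector3[0]; }
    int[]     BuildTriangles() { return new int[0]; }
}
```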
Certainly it would be unusual in any game to be creating or Destroying a GameObject or a Component every frame.
Be sure to watch this video: http://www.youtube.com/watch?v=IX041ZvgQ$$anonymous$$E — even if you don't use C#, you can use the object pooling script he writes, which is pretty awesome.
Thanks for the comments - yes, I'm very familiar with pooling algorithms. In this case the mesh I'm rebuilding is a special 'one per frame' kind of deal whose parameters can vary wildly. Most likely I will refactor to improve my reuse of existing assets because no, I don't feel at all comfortable with a destroy/create cycle, but perhaps that leads me to a follow-up question...
I'm from a long console development background and I'm very used to handling memory allocation super carefully - allocating multiple small objects in a single block, reusing memory, placement new, etc. Unfortunately all of those techniques seem to be hard, if not impossible, to apply in the managed Unity environment. Are there any tricks that let you do things like allocating one big array of vectors that you manage yourself, i.e. handing subsets of that array to other (child) objects? So far I've just been 'going with it' and letting the GC do its thing, but this is against the grain, and cases like the one that prompted this thread add to my unease.

As a specific challenge: when building a mesh procedurally, and when vertex/triangle counts change dynamically, is it even possible to avoid reallocating at least the 'triangles' array (since the length of that array is the only way a mesh knows how many tris it contains)?!
Perhaps the answer to reallocating the triangles array is to allocate a maximum-sized array and then fill the unused space with degenerate tris? ...I think I'll give that a go when I have a chance.
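That idea can be sketched without any Unity dependency: keep one maximum-sized index buffer and, after writing this frame's indices, pad the tail with degenerate triangles (all three indices equal), which have zero area and are culled by the rasterizer. `PadTriangles` is a hypothetical helper name, not a Unity API:

```csharp
// Pad a fixed-capacity triangle index buffer with degenerate triangles
// so the same max-sized array can be assigned to Mesh.triangles every
// frame without reallocating, regardless of this frame's tri count.
static int[] PadTriangles(int[] buffer, int usedIndexCount)
{
    for (int i = usedIndexCount; i < buffer.Length; i++)
        buffer[i] = 0;   // index triple (0,0,0): zero-area, never drawn
    return buffer;
}
```

The trade-off is that the GPU still processes the full-length index buffer each frame, so the capacity shouldn't be wildly larger than the typical tri count.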
There isn't a lot you can do about arrays - Unity copies them every time they pass into or out of a setter/getter in the Unity API. The best you can do is minimise other copies. Maybe using arrays of the same size will trigger optimisations inside Unity.
Answer by jampoz · Aug 26, 2011 at 07:29 AM
That's called Object Pooling, isn't it?
Actually, it's not. Object pooling is where you collect no-longer-used objects and reuse them the next time one is needed. Trepan is doing one better: reusing the object immediately, so there is no overhead of managing the pool at all.
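For contrast, here is a minimal sketch of the pooling pattern the thread keeps referring to, as a plain generic class (no Unity types, names are illustrative): released instances are stacked and handed back out instead of being destroyed and reallocated.

```csharp
using System.Collections.Generic;

// Minimal object pool: Get() reuses a released instance when one is
// available, otherwise allocates; Release() returns it for later reuse.
public class ObjectPool<T> where T : new()
{
    readonly Stack<T> free = new Stack<T>();

    public T Get() => free.Count > 0 ? free.Pop() : new T();

    public void Release(T item) => free.Push(item);

    public int FreeCount => free.Count;
}
```

In Unity the same shape is typically used with SetActive(false)/SetActive(true) on pooled GameObjects rather than new/Destroy.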
I know you're joking, but it's exactly that sort of clean generic thinking that probably created the problem in the first place. There is a joke that whenever a programmer needs boiled water, he first empties the kettle in order to start from a known initial state.