Why does allocation heavy code not scale to multiple threads?
I've been trying to optimise an algorithm using worker threads, and I've found that I get very poor performance when the code does a lot of allocations. Using the same job manager, an allocation-free algorithm scales roughly linearly with thread count, but with allocations the total time stays pretty much constant regardless of thread count (i.e. each iteration takes 4x as long when using 4 threads). The code I am trying to optimise runs in the editor, and I don't have the profiler window open when measuring (I'm timing with Stopwatch).
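For reference, here is a minimal sketch of the kind of measurement I'm doing; the class name, the 4 KB allocation size, and the iteration counts are placeholders, not my actual job manager. The total work is fixed and split across the threads, so a well-scaling run should finish faster as the thread count goes up.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class AllocScalingRepro
{
    // Split a fixed total amount of work across 'threadCount' threads and
    // return the wall-clock time for all of them to finish.
    static double RunMs(int threadCount, bool allocate)
    {
        const int totalIterations = 1000000;
        int perThread = totalIterations / threadCount;
        var threads = new Thread[threadCount];
        var sw = Stopwatch.StartNew();

        for (int t = 0; t < threadCount; t++)
        {
            threads[t] = new Thread(() =>
            {
                double acc = 0;
                for (int i = 0; i < perThread; i++)
                {
                    if (allocate)
                        acc += new byte[4096].Length; // allocation-heavy work
                    else
                        acc += Math.Sqrt(i);          // allocation-free work
                }
            });
            threads[t].Start();
        }

        foreach (var thread in threads)
            thread.Join();

        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }

    public static void Report()
    {
        foreach (int n in new[] { 1, 2, 4 })
        {
            UnityEngine.Debug.Log(string.Format(
                "threads={0}  noAlloc={1:F1} ms  alloc={2:F1} ms",
                n, RunMs(n, false), RunMs(n, true)));
        }
    }
}
```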
I would expect better scaling from the stock Mono heap, or from the Microsoft .NET heap.
My theory is that Unity has some sort of allocation hooks that are causing the code to serialise (e.g. locking a buffer for profiling).
Can anyone with knowledge of the internals shed some light on this? Has anyone else noticed the same thing?
FYI, check out Loom on the Asset Store; it's only a few bucks, even.
http://www.youtube.com/watch?v=k$$anonymous$$0$$anonymous$$ubh0CWA
