Talking across too many objects causes big lag?
I'm experimenting with threading and coroutines for a voxel chunk generation system. At one point I had a master script storing all of the world data in Dictionary<string, CubeType[,,]>-like structures, and the chunks themselves would ask for this data whenever they needed it.
In short, running through a 20x20x20 chunk and asking the parent transform's script "give me the block type at 5,5,5" 8000 times was producing around a 1.3-second lag on my not-so-awesome computer.
I changed the master script to calculate the data and then assign each chunk its own copy to use, and sure enough it's down to 0.033 seconds again.
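Roughly, the difference between the two access patterns looks like this. This is just a minimal C# sketch; WorldMaster, GetCube, CopyChunkData, and the CubeType enum here are placeholder names, not my actual scripts:

using System.Collections.Generic;
using UnityEngine;

// Placeholder block type; the real project's CubeType will differ.
public enum CubeType { Air, Dirt, Stone }

public class WorldMaster : MonoBehaviour
{
    // All world data, keyed by a chunk-coordinate string like "0_0_0".
    private Dictionary<string, CubeType[,,]> world =
        new Dictionary<string, CubeType[,,]>();

    // Slow pattern: every cube triggers a cross-script call plus a
    // dictionary lookup (8000 calls for one 20x20x20 chunk).
    public CubeType GetCube(string chunkKey, int x, int y, int z)
    {
        return world[chunkKey][x, y, z];
    }

    // Fast pattern: hand the chunk its own copy once; after that it
    // reads a plain local array with no cross-script traffic at all.
    public CubeType[,,] CopyChunkData(string chunkKey)
    {
        return (CubeType[,,])world[chunkKey].Clone();
    }
}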
There are certainly other small factors here and there, but I noticed a huge drop in performance as soon as I set up the master handler script. Is this something to avoid in the future whenever possible? Some people are adamant about OOP and making lots of little scripts that each do one thing. Does that advice not hold when lots of small calculations and data transfers are involved?
I'm not quite sure what answer can be given, as I'm not quite sure what your question is... you should write a script that achieves the result you want, with performance that's acceptable to you (or, more precisely, to your users). That's it.
This isn't a good format for "Answers", but in summary: searching a large collection 8000 times will be slower than having a direct reference to the data you need. So yes, not having the master script seems the better approach.
@Tanoshimi, I was asking whether heavily used data should be kept within the script that uses it, or whether it shouldn't matter where the data lives; I wanted to know for future reference. Of course performance is an issue in all applications, and if I can know ahead of time that one approach is much faster than another, then I'll use that approach as much as possible.
Answer by malkere · May 14, 2015 at 03:53 AM
No one had a clear answer, though Mike's logic is certainly sound. The assumption is that the data does not already exist; it needs to be created. Does creating it via multiple scripts affect speed, compared to using only one script to run all the methods?
I set up a test:
A 50x50 one-sided (top only) voxel array generates 51x51 vertices. It then generates the mesh and updates it.
Approach 1: the vertex calculations are run 100 times by calling a GetNoise function within the mesh-generating script. The mesh is then generated and applied.
Approach 2: the vertex calculations are run 100 times by calling a GetNoise function on a second script attached to the same transform. The mesh is then generated and applied.
Conclusion: approach 2, calling the GetNoise function (written exactly the same) on a second script, adds 0.4 s to the processing time on my cheap little laptop. So there is clearly a cost to moving off-object/off-script and back. It's certainly not much here, but in the original post I was making many tens of thousands of calls between scripts, and that was very clearly slower. The test looked roughly like the sketch below.
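A sketch of the comparison, with both components attached to the same GameObject; NoiseProvider and MeshTester are illustrative names, and the timings above came from my real scripts, not this exact code:

using UnityEngine;

public class NoiseProvider : MonoBehaviour
{
    // Identical body to GetNoiseLocal below, just on a second component.
    public float GetNoise(int x, int z)
    {
        return Mathf.PerlinNoise(x * 0.1f, z * 0.1f);
    }
}

public class MeshTester : MonoBehaviour
{
    float GetNoiseLocal(int x, int z)
    {
        return Mathf.PerlinNoise(x * 0.1f, z * 0.1f);
    }

    void Start()
    {
        NoiseProvider provider = GetComponent<NoiseProvider>();

        // Approach 1: 100 passes over a 51x51 grid, same-script calls.
        float t0 = Time.realtimeSinceStartup;
        for (int i = 0; i < 100; i++)
            for (int x = 0; x < 51; x++)
                for (int z = 0; z < 51; z++)
                    GetNoiseLocal(x, z);
        Debug.Log("Same script: " + (Time.realtimeSinceStartup - t0));

        // Approach 2: the identical function on a second script.
        float t1 = Time.realtimeSinceStartup;
        for (int i = 0; i < 100; i++)
            for (int x = 0; x < 51; x++)
                for (int z = 0; z < 51; z++)
                    provider.GetNoise(x, z);
        Debug.Log("Second script: " + (Time.realtimeSinceStartup - t1));
    }
}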
Answer by toddisarockstar · May 14, 2015 at 04:48 AM
I/we can't give definite answers without seeing and understanding exactly what you are trying to do in your project and what compromises can be made.
Obviously running separate scripts on a billion objects is a big NO.
There are several creative techniques developers have used to get around these sorts of problems and accomplish big things.
For example: scripts that only run within a certain distance of the camera... yay.
Remember, Update runs through your master script, I'd guess, about 60 times a second.
An obvious answer would be to limit how many times per second your master script needs to do whatever it is doing.
Here is super simple JavaScript code to limit your master code to calculate only every third frame, i.e. about 20 times a second at 60 fps:
var slow : int;

function Update () {
    slow = slow - 1;
    if (slow < 1) {
        slow = 3;
        // Put all the really heavy code here to cut CPU usage to a third.
    }
}
Now the heavy section runs a third as often, and your users hopefully won't notice 20 updates per second instead of 60.
Tanoshimi is exactly right. You have to look at the big picture first and come up with a creative plan for arranging your game to accomplish whatever you are doing.
Even better, depending on how many pieces you're working with, divide them into groups and perform calculations on them in waves, scaled (if necessary or applicable) by distance from the player and/or the types of changes made to them; see the sketch below.
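A minimal sketch of that wave idea, assuming placeholder names (WaveUpdater, chunks, RefreshChunk):

using UnityEngine;

public class WaveUpdater : MonoBehaviour
{
    public Transform[] chunks;     // the pieces to process
    public int groupCount = 4;     // how many waves to split them into
    private int currentGroup;

    void Update()
    {
        // Only every groupCount-th chunk is refreshed this frame,
        // starting at a different offset each frame.
        for (int i = currentGroup; i < chunks.Length; i += groupCount)
            RefreshChunk(chunks[i]);

        currentGroup = (currentGroup + 1) % groupCount;
    }

    void RefreshChunk(Transform chunk)
    {
        // Placeholder for the real per-chunk work; a distance-from-player
        // check could also go here to skip faraway pieces.
    }
}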
Thank you for the follow-up.
This is all based on a voxel engine, where 20x20x20 chunks check their cube values so they can generate their individual meshes. In an attempt to centralize the data for when a chunk wants to know its neighbors' cube values (for determining whether it needs to render a specific side of itself or not), I tried putting all of my data into dictionaries on my "master" script. Initially I was making all 8000 calls, one for each cube in each chunk, asking the master script for an answer. This only ran once per spawn, after which the chunk was set up and fine, but it was far slower than when the chunks ignored one another and just rendered their own data. The neighbor test is conceptually like the sketch below.
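A sketch of the per-face check, reusing the placeholder CubeType from the earlier sketch; the 20-cube bounds match the chunk size above, everything else is illustrative:

// Belongs in the chunk's mesh script. Returns true when the face
// pointing in direction (dx,dy,dz) from cube (x,y,z) should be rendered.
bool IsFaceVisible(CubeType[,,] cubes, int x, int y, int z,
                   int dx, int dy, int dz)
{
    int nx = x + dx, ny = y + dy, nz = z + dz;

    // Neighbor lies outside this 20x20x20 chunk: answering this is
    // what needed the master script (the expensive part).
    if (nx < 0 || nx >= 20 || ny < 0 || ny >= 20 || nz < 0 || nz >= 20)
        return true;

    // Neighbor is inside the chunk: a solid neighbor hides the face.
    return cubes[nx, ny, nz] == CubeType.Air;
}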
Later I moved the calculation of the data into a thread on the master script, and then just fed the whole finished array into the chunk for its own use; roughly like the sketch below. I'm still not sure of the best way to handle all the world data as the player moves around, when sometimes the data hasn't been generated yet, sometimes it has been saved but isn't currently in use, etc. I'll just keep at it though.
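A sketch of that handoff, assuming placeholder names (ThreadedGenerator, Chunk.SetData) and the CubeType placeholder from above. The key constraint is that Unity's API must only be touched on the main thread, so the worker thread builds pure data and the main thread applies it:

using System.Threading;
using UnityEngine;

public class Chunk : MonoBehaviour
{
    private CubeType[,,] cubes;
    public void SetData(CubeType[,,] data) { cubes = data; }
}

public class ThreadedGenerator : MonoBehaviour
{
    private CubeType[,,] result;
    private volatile bool done;

    void Start()
    {
        // Worker thread: pure data only, no Unity API calls in here.
        new Thread(() =>
        {
            CubeType[,,] data = new CubeType[20, 20, 20];
            // ...fill the array with generated terrain values...
            result = data;
            done = true;
        }).Start();
    }

    void Update()
    {
        if (done)
        {
            done = false;
            // Main thread: safe to hand the finished array to the chunk.
            GetComponent<Chunk>().SetData(result);
        }
    }
}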
Learned an important lesson about script interaction.