Trying to convert slow code to compute shader
I have a Dictionary with a Vector3 as the key and a custom class as the value (it stores pos/rot/type/etc.). I also have a function that starts a "spider" that crawls through the whole 3D grid. It starts at, say, (0,0,0) and can only index the dict at the six positions up/down/left/right/forward/back relative to its own position. It then moves to each position it found and re-runs itself there. For example, if it found up and left, it runs at up (0,1,0) and checks the six neighbours of that position, then runs at left (-1,0,0) and checks the six neighbours of that one, and so on.

When the spider has found all the neighbouring blocks it can reach, it checks whether that is all the blocks in the dict. If so, it does nothing; otherwise it cuts out the ones it found, makes a new dict for them, and re-runs the spider on some of the blocks it hasn't found yet.

This lets the user build in the dict, and if they cut a wall or a car in half, it falls into two pieces because of the spiders.
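Here is a simplified sketch of what the spider currently does (the Block class, method names, and fields here are placeholders; my real class stores more data):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Placeholder for the custom block class mentioned above.
public class Block
{
    public Vector3 pos;
    public Quaternion rot;
    public int type;
}

public static class SpiderExample
{
    static readonly Vector3[] directions =
    {
        Vector3.up, Vector3.down, Vector3.left,
        Vector3.right, Vector3.forward, Vector3.back
    };

    // Recursively visit every block reachable from 'pos' via the six axes.
    static void Spider(Dictionary<Vector3, Block> blocks,
                       Vector3 pos,
                       HashSet<Vector3> found)
    {
        // Stop if there is no block here or we already visited it.
        if (!blocks.ContainsKey(pos) || !found.Add(pos))
            return;

        foreach (var dir in directions)
            Spider(blocks, pos + dir, found);
    }

    // Cut the reachable piece into its own dict if it isn't the whole thing.
    public static Dictionary<Vector3, Block> SplitIfDisconnected(
        Dictionary<Vector3, Block> blocks, Vector3 start)
    {
        var found = new HashSet<Vector3>();
        Spider(blocks, start, found);

        if (found.Count == blocks.Count)
            return null; // everything is still connected, do nothing

        var piece = new Dictionary<Vector3, Block>();
        foreach (var p in found)
        {
            piece[p] = blocks[p];
            blocks.Remove(p);
        }
        return piece; // then re-run the spider on the remaining blocks
    }
}
```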
This is quite slow for dicts with more than 1K blocks. I am wondering how I can write a compute shader that runs multiple spiders at once, maybe 1K spiders, or one per block in the dict if there are fewer than 1K. But I have no idea how to do this, or how to make it a callable function in my C# code.

Basically: how do I make a compute shader that loops over all the elements in a dict at once and calculates whether they should break off from the main dict?
Answer by Captain_Pineapple · Sep 10, 2021 at 02:58 PM
basically: Don't.
This isn't something that can be done efficiently with a shader. A (compute) shader is good at processing lots of similar data simultaneously in the same way. It is not good at processing sequential data, as in: step 1, do stuff, then process the result (which may be, for example, 1 or 10 elements) again, and repeat until the results are empty.

Instead you should try to analyse and optimise your code. Perhaps add the relevant code for your search so we can check how it can be improved. Imo Dictionaries do not scale well with size, so if you have a way to replace that with a more fitting data structure, this alone could bring you quite some performance.
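Not knowing your actual code, here is a rough sketch of what I mean: an iterative flood fill with an explicit queue avoids the per-call overhead and stack-depth risk of re-running a recursive function for every neighbour. (Block and the Vector3 keys mirror your post; both are placeholders.)

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class FloodFill
{
    static readonly Vector3[] directions =
    {
        Vector3.up, Vector3.down, Vector3.left,
        Vector3.right, Vector3.forward, Vector3.back
    };

    // Returns every position reachable from 'start' through the dict.
    public static HashSet<Vector3> Reachable(
        Dictionary<Vector3, Block> blocks, Vector3 start)
    {
        var visited = new HashSet<Vector3>();
        var queue = new Queue<Vector3>();

        if (blocks.ContainsKey(start))
        {
            visited.Add(start);
            queue.Enqueue(start);
        }

        while (queue.Count > 0)
        {
            var pos = queue.Dequeue();
            foreach (var dir in directions)
            {
                var next = pos + dir;
                // Each block is enqueued and visited at most once.
                if (blocks.ContainsKey(next) && visited.Add(next))
                    queue.Enqueue(next);
            }
        }
        return visited;
    }
}
```

With this, checking whether something broke off is just comparing visited.Count to blocks.Count, and each block is processed a constant number of times. Also, if your positions are whole numbers, Vector3Int is a safer dictionary key than Vector3, since it avoids float comparison pitfalls.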
Fair point, I mixed that up with Lists vs HashSets. Edited my main post to correct that one.