Is it possible to do procedural calculations in shaders?
Hello,
I'm doing procedural generation for the texture of an earth-like planet. It's basically simple diamond-square with some Perlin noise for the polar caps and a lot of transformations on top of it, generating a 2048x1536px cubemap texture plus a normal map of the same size. I'm happy with the outcome, but not so happy with the performance: it takes 5-6s per planet, and since I'd like to have more than one, this is a problem. I've already done some optimization, bringing the calculation time down from about 15s. More could be done, but it would only be chipping away tiny fractions.
I've done nothing with shaders so far, so my question is at a basic level: can some or all of that work be done by shaders? If so, will this improve performance (since the GPU is doing the work)?
If anybody could point me in the right direction, I'd be really glad. Thank you.
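For reference, the diamond-square part of the approach described above can be sketched on the CPU as follows. This is a minimal, generic implementation, not the poster's actual code; the `roughness` parameter and the corner-seeding range are assumptions, since the post doesn't give those details.

```python
import random

def diamond_square(n, roughness=0.5, seed=None):
    """Generate a (2**n + 1) x (2**n + 1) heightmap with diamond-square.

    `roughness` (a hypothetical parameter, not from the original post)
    controls how quickly the random offsets shrink at finer scales.
    """
    rng = random.Random(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    # Seed the four corners with random heights.
    for y in (0, size - 1):
        for x in (0, size - 1):
            h[y][x] = rng.uniform(-1.0, 1.0)
    step, scale = size - 1, 1.0
    while step > 1:
        half = step // 2
        # Diamond step: each square's centre = average of its four corners + noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + rng.uniform(-scale, scale)
        # Square step: each diamond's centre = average of its edge neighbours + noise.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                total, count = 0.0, 0
                for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < size and 0 <= nx < size:
                        total += h[ny][nx]
                        count += 1
                h[y][x] = total / count + rng.uniform(-scale, scale)
        step //= 2
        scale *= roughness
    return h
```

The inner loops touch every pixel once per scale level, which is exactly the kind of per-pixel, data-parallel work the answers below suggest moving to the GPU.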
Does the texture have to be different each time / are you generating different textures for different planets? Basically I'm just wondering what is stopping you from generating it once, offline. To partially answer your question, though: yes, using the GPU would help.
Answer by MakeCodeNow · Feb 19, 2014 at 03:54 PM
You can definitely do this on the GPU and it will be very, very fast. However, you'll probably want to have Unity Pro so that you can render into and read out of RenderTargets. A high-level algorithm would look like this:
1. Convert your current code into a shader.
2. Create a dedicated RenderTarget.
3. Create a dedicated camera and point it at the RenderTarget.
4. Render a full-screen quad with your shader from that camera.
5. Create a new Texture2D.
6. Copy the pixels out of your RenderTarget into the new Texture2D.
Thank you for your answer. Sadly, going Pro is not an option at the moment. Can it be done without Pro features?
You can also use Texture2D.ReadPixels to make a texture out of the last screen render. You can probably get that to work, and I would expect it to still be faster than the current approach, though I'm not sure by how much.
PS - please mark the question as answered if/when you think it is :)
I'd like to ask one more follow up question, if I may.
The way you describe it (first drawing the texture out to a RenderTarget or the screen and then grabbing it and feeding it back into a RenderTexture or Texture2D) seems odd to me. Can't it be stored in an array by the shader and accessed on the fly? What is the rationale behind this? If you could elaborate on that point a bit more, I'd be glad.
Sorry for being so persistent. I've been consulting the mighty Google about this, but it seems there are not a lot of resources on this / I can't find them, and I've never worked with shaders before, so I'm trying to get the basics right.
My assumption is that once generated, this data is static. If so, you definitely want to copy it into CPU memory and/or a texture. If the result is dynamic and changes every frame, then you can just put the shader on your planet sphere, but that didn't sound like what you were doing in the CPU version.
In any case, shaders aren't like CPU code when it comes to memory. They can't allocate data and can only access whatever state (textures, constants, etc.) is set up by the CPU before they run.
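The execution model being described can be illustrated outside of any GPU API: a fragment shader is essentially a pure function run once per pixel, reading only the read-only inputs bound by the CPU and producing one output; it cannot build up arrays across invocations. A Python sketch of that model, with an entirely made-up toy pattern (`polar_threshold` and both function names are hypothetical):

```python
def shade_pixel(u, v, constants):
    """Stand-in for a fragment shader: computes one output value from the
    pixel's coordinates plus read-only constants. No shared mutable state,
    no allocation, no knowledge of other pixels."""
    # Toy pattern: pixels near the top/bottom edges become bright "polar caps".
    if abs(v - 0.5) * 2.0 > constants["polar_threshold"]:
        return 1.0
    return v

def render(width, height, constants):
    """The GPU conceptually runs shade_pixel for every pixel in parallel;
    this nested loop is the serial CPU equivalent."""
    return [[shade_pixel((x + 0.5) / width, (y + 0.5) / height, constants)
             for x in range(width)]
            for y in range(height)]
```

This is why the render-then-copy dance exists: the shader's only "output array" is the render target the CPU bound for it, and getting those pixels back into a Texture2D is a separate CPU-driven copy.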
Ah, I see, that makes sense. Thank you very much.
Now for some learning.