How to use native code for GPU calculations?
I've written a shader that calculates a heightmap for the terrain I'm creating (using different noises such as Perlin and Worley) and fills a 2-dimensional array of floats. That part works, but to get the heightmap back to the CPU I currently need to encode the floats, render them to a texture, transfer that to the CPU, and decode them again. This is very inefficient: it's slow, uses 4x as much GPU memory as necessary, and loses a lot of precision. For an overview, I'd point you to this site, which explains the method I'm using (mine is slightly modified, but the principle is the same).
However, it's apparently possible to get a pointer (using GetNativeTexturePtr() or GetNativeTextureHandle()) to the two-dimensional array on the GPU (which is essentially a texture) and use that to read the entire array from native code (such as C++). That is the only information I could find about it, and it is not clear at all.
In short, my plan is to read the floats from the GPU (from a RenderTexture) using native code, do any necessary conversions there (to produce a 2-dimensional array of floats), and then pass the entire array back to managed code (C# in my case). Does anyone have directions or example code for this?