How can I draw an array of data (640x480) very fast?
Hello all,
I have an array of values from a Cellular Automata simulation (640x480 grid) that I would like to draw straight to screen. My understanding is that these are the only options available:
- Make a Texture2D every frame and draw it as a GUI texture, taking a bad framerate hit because of Apply().
- Draw it as particles, but that many particles (307,200) would kick the framerate in the face.
- Draw it with low-level GL calls, but Unity's GL class has no GL_POINTS and it would be slow anyway.
- Draw it as an Image Effect/Graphics.Blit, but passing the data to the shader would also have to go through a Texture2D, so same problem with Apply().
So... how can I draw/blit all that 2D pixel data the fastest way possible?
ANY trick, option or alternative would be most welcome. Thanks in advance.
Answer by Jens Fursund · May 08, 2010 at 04:16 PM
A little more advanced, but you could also rewrite the cellular automata simulation to run inside a shader, by applying that shader to a fullscreen quad. If you need the previous state of the cells when evaluating, you can do this by making two RenderTextures and ping-ponging between the two. Something like this:
- Set rendertextureOne active
- Set rendertextureTwo as a texture in the material with the cellular automata shader
- Draw fullscreen quad with the cellular automata material
- Use information from rendertextureTwo to calculate next step
---- Next frame ----
- Set rendertextureTwo active
- Set rendertextureOne as a texture in the material with the cellular automata shader
- Draw fullscreen quad with the cellular automata material
- Use information from rendertextureOne to calculate next step
If you can't fit all the information into one texture, you can always use more of these ping-pong pairs. This technique should speed up your calculations quite a bit, provided they lend themselves to this kind of parallel processing.
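The ping-pong steps above can be sketched outside Unity as well. Here is a minimal, language-agnostic Python sketch of the double-buffer pattern: two plain grids stand in for the two RenderTextures, and Conway's Game of Life stands in for the asker's automaton (the rule and the small grid size here are just placeholders):

```python
# Ping-pong (double-buffer) sketch: read from one buffer, write to the
# other, then swap roles each frame -- exactly as with the two
# RenderTextures described above. Grid size kept small for clarity.

W, H = 64, 48

def step(src, dst):
    """Read every cell from src, write the next generation into dst."""
    for y in range(H):
        for x in range(W):
            # Count the 8 neighbours, wrapping at the edges.
            n = sum(
                src[(y + dy) % H][(x + dx) % W]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            dst[y][x] = 1 if n == 3 or (src[y][x] and n == 2) else 0

buf_a = [[0] * W for _ in range(H)]
buf_b = [[0] * W for _ in range(H)]
buf_a[10][10] = buf_a[10][11] = buf_a[10][12] = 1  # a "blinker"

for frame in range(2):
    step(buf_a, buf_b)            # read A, write B
    buf_a, buf_b = buf_b, buf_a   # swap roles, as with the RenderTextures
```

A blinker oscillates with period 2, so after the two steps above `buf_a` holds the original pattern again, and no per-cell state was ever read and written in the same buffer.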
A double-buffered shader using render textures to avoid Apply(). How cool. Would you have an example of this being used, by any chance?
Answer by Magnus Wolffelt · Aug 17, 2010 at 12:01 PM
It is actually possible, on OSX, to blit textures very fast by having a plugin access the OpenGL texture "handle", and write the data directly from there.
This has been used successfully for putting high-resolution quicktime movies into Unity.
The big issue is that it doesn't work on Windows, because Unity is using Direct3D there, and it appears to be quite complex to update D3D textures from plugins. Any ideas or pointers welcome.
Update: you can force Windows builds to use OpenGL with the -force-opengl command-line parameter.
Answer by Eric5h5 · May 07, 2010 at 04:27 PM
You could divide that into a grid of smaller textures, and then update only the textures that actually change.
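The bookkeeping for this could look something like the sketch below (plain Python, names illustrative): split the grid into tiles, diff each tile against the previous frame, and only call Apply() on the small textures whose tile came back dirty.

```python
# Dirty-tile sketch: find which tiles of the grid changed since the
# last frame, so only those tiles' textures need re-uploading.

TILE = 64  # tile edge length; a 640x480 grid gives a 10x8 tile grid

def dirty_tiles(prev, curr, width, height, tile=TILE):
    """Return the (tx, ty) indices of tiles where prev and curr differ."""
    dirty = []
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            changed = any(
                prev[y][x] != curr[y][x]
                for y in range(ty, min(ty + tile, height))
                for x in range(tx, min(tx + tile, width))
            )
            if changed:
                dirty.append((tx // tile, ty // tile))
    return dirty

prev = [[0] * 128 for _ in range(128)]
curr = [row[:] for row in prev]
curr[70][5] = 1  # one changed cell, inside tile (0, 1)
print(dirty_tiles(prev, curr, 128, 128))  # → [(0, 1)]
```

In Unity each tile would map to its own small Texture2D, and only the dirty ones would pay for SetPixels + Apply() that frame; for a sparse automaton that can cut the upload cost dramatically.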
So you reckon there's no faster way to blit than going through a texture upload with Apply()?
Answer by N1nja · May 22, 2010 at 04:39 PM
Write a shader!
good answer! "MASSIVE SARCASM QUOTES"
Answer by Mike 3 · May 07, 2010 at 02:22 PM
I'd just use your first idea and update the texture every 0.1s or so in a coroutine - it should still look plenty good without killing your frame rate.
Hi Mike, thanks for your reply. Unfortunately a coroutine is not a thread, and it would actually block the graphics pipeline while the texture is uploaded. Also, 0.1s would only give 10 fps, and the point of my question was to get the maximum framerate.
Indeed - you can't Apply from a thread without thread errors being spammed back at you. If the texture is the only thing you're interested in drawing, then yes, 10 fps probably won't be much good; I assumed you were trying to keep a decent framerate for other things while you drew this :)
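As a rough illustration of the trade-off being discussed here, this is a tiny Python sketch of that throttling, where the `upload` callback is a hypothetical stand-in for the SetPixels + Apply() call:

```python
# Throttled-upload sketch: advance the simulation every frame, but only
# pay for the expensive texture upload every k-th frame.

def run(steps, upload_every, upload):
    """Simulate `steps` frames, uploading every `upload_every` frames."""
    uploads = 0
    for frame in range(steps):
        # ... advance the cellular automaton here ...
        if frame % upload_every == 0:
            upload(frame)  # stand-in for SetPixels + Apply()
            uploads += 1
    return uploads

# 60 simulated frames with an upload every 6th frame (~0.1s at 60 fps):
# the expensive upload runs 10 times instead of 60.
print(run(60, 6, lambda f: None))  # → 10
```

The simulation stays smooth at full framerate; only the visible texture lags behind, which is the compromise Mike is suggesting.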