GPU Generated Mesh?
Hello! I was wondering: is it possible to generate meshes just using shaders? Or deform already generated meshes with their wireframe? Can you calculate Perlin noise using them? Thanks in advance! Any answer is greatly appreciated!
Answer by CHPedersen · Oct 12, 2011 at 06:46 AM
It is possible to generate a mesh just using shaders, but not from nothing. Here's why (simplified version):
Shaders are programs that are executed at key positions in the graphics pipeline. The pipeline is the series of steps on the graphics card through which all geometry goes. Geometry starts out as vertices ordered into meshes (as you know), and one of the first steps involves "primitive assembly", that is, the assembly of vertices into the polygons they're a part of. After this point, a vertex shader executes that optionally transforms each vertex further by changing its color, position, UV coordinates, or a wide range of other things.
When the vertex shader has finished executing on every vertex of a model, the model is rasterized into screen-space candidates for pixel updates to the frame buffer, the so-called "fragments" of the model. The reason they're not pixels yet but only candidates is that they might not get written to the frame buffer (they might fail a depth test, for instance, because they're obscured by some other object). This is where the fragment shader kicks in: just like the vertex shader operates on every vertex of a model, the fragment shader operates on every fragment of the rasterized model, after it has "become pixels", to put it bluntly.
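The two stages described above can be sketched as a minimal Cg-style program. This is a hypothetical illustration, not tied to any particular engine; the entry-point and uniform names are assumptions:

```hlsl
// Minimal Cg-style vertex + fragment pair (illustrative sketch).
// MyVert/MyFrag and the ModelViewProj uniform are hypothetical names.
float4x4 ModelViewProj;

struct v2f {
    float4 pos   : POSITION;
    float4 color : COLOR;
};

// Runs once per vertex: transforms the position into clip space
// and passes a color along toward rasterization.
v2f MyVert(float4 position : POSITION, float4 color : COLOR)
{
    v2f o;
    o.pos   = mul(ModelViewProj, position);
    o.color = color;
    return o;
}

// Runs once per fragment produced by rasterizing the primitives.
float4 MyFrag(v2f i) : COLOR
{
    return i.color;
}
```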
But the fragments passed to the fragment shader do not necessarily have to stem from just the geometry that originally entered the graphics card (i.e. a mesh generated on the CPU side). With DirectX 10/OpenGL 3.2, a type of shader that can create new geometry at the GPU level was introduced to the pipeline. This is the so-called "Geometry Shader", and its execution takes place after the vertex shader. Its input is a whole primitive (a point, line, or triangle of the mesh currently being rendered, rather than a single vertex), and it can generate new geometry based on that input, so it effectively changes what is rasterized into fragments at the GPU level. That's the key point in the answer to your question: yes, you can generate a mesh on the graphics card, but it requires that you write a Geometry Shader and, as far as I know, that you pass something to it to begin with; I don't think it executes unless something is already passing through the pipeline.
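As a rough sketch of what a geometry shader looks like in HLSL (the struct names and the offset are illustrative assumptions, not from the thread), here is one that takes a triangle in and emits it plus a second, offset copy, i.e. geometry that never existed on the CPU side:

```hlsl
// Hypothetical HLSL geometry shader sketch.
struct GSInput  { float4 pos : SV_POSITION; };
struct GSOutput { float4 pos : SV_POSITION; };

[maxvertexcount(6)]
void MyGeom(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    GSOutput o;

    // Pass the original triangle through unchanged.
    for (int i = 0; i < 3; i++)
    {
        o.pos = input[i].pos;
        stream.Append(o);
    }
    stream.RestartStrip();

    // Emit a second, offset copy: new geometry created on the GPU.
    for (int j = 0; j < 3; j++)
    {
        o.pos = input[j].pos + float4(0.0, 0.1, 0.0, 0.0);
        stream.Append(o);
    }
    stream.RestartStrip();
}
```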
You can't generate new geometry with just vertex and fragment programs, because as explained, they operate on vertices that are sent through the pipeline from the CPU side, and the fragments that result from the rasterization of these vertices.
Your other two questions are easy. :)
Yes, you can deform already generated meshes via their wireframe; that is what the vertex shader does when you change the positions of the vertices passed to it.
Yes, you can calculate Perlin noise using shaders; see this link for an example. There are other examples if you google for it a bit.
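For a flavor of what GPU-side noise looks like, here is a small value-noise function in Cg/HLSL. Note this is value noise, a cheap stand-in for true Perlin (gradient) noise, and the sine-based hash is a common trick chosen here as an assumption, not something from the thread:

```hlsl
// Illustrative value noise in Cg/HLSL, computed entirely on the GPU.
float hash(float2 p)
{
    // Common sine-based pseudo-random hash (an assumption, not canonical).
    return frac(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

float valueNoise(float2 p)
{
    float2 i = floor(p);
    float2 f = frac(p);
    // Smoothstep-style interpolation weights.
    float2 u = f * f * (3.0 - 2.0 * f);

    // Bilinearly interpolate random values at the four cell corners.
    float a = hash(i);
    float b = hash(i + float2(1.0, 0.0));
    float c = hash(i + float2(0.0, 1.0));
    float d = hash(i + float2(1.0, 1.0));
    return lerp(lerp(a, b, u.x), lerp(c, d, u.x), u.y);
}
```

Summing several octaves of this at increasing frequencies gives the familiar fractal "Perlin-like" terrain look.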
Thank you verrrry much for your answers! I've been googling quite a lot, but I haven't found any answers as good and precise as yours! Thank you! Just one more question:
In the following example, it doesn't actually change the wireframe either... so if I add a MeshCollider, for instance, it will collide with the old mesh... Is this a wrong example? Would I have to use that thing with o.position? Sorry, but I am new to shader scripting :)
Shader "Example/Normal Extrusion" {
  Properties {
    _MainTex ("Texture", 2D) = "white" {}
    _Amount ("Extrusion Amount", Range(-1,1)) = 0.5
  }
  SubShader {
    Tags { "RenderType" = "Opaque" }
    CGPROGRAM
    #pragma surface surf Lambert vertex:vert
    struct Input {
      float2 uv_MainTex;
    };
    float _Amount;
    void vert (inout appdata_full v) {
      v.vertex.xyz += v.normal * _Amount;
    }
    sampler2D _MainTex;
    void surf (Input IN, inout SurfaceOutput o) {
      o.Albedo = tex2D (_MainTex, IN.uv_MainTex).rgb;
    }
    ENDCG
  }
  Fallback "Diffuse"
}
Thanks again!
To be perfectly honest with you, the reason I know how the pipeline works at a basic, theoretical level is because I'm studying it myself these days. ;) I'm not that great at GPU programming either; I'm just reading through Nvidia's own book on Cg, available online here: http://developer.nvidia.com/node/76 The first few chapters explain the pipeline. But I think I can answer your question anyway. :)
You're right that extruding vertices at the GPU level will leave the MeshCollider unchanged. This is because of the boundary between the CPU and the GPU in terms of data availability: data that appears or gets calculated on the GPU side exists only in the graphics card's video memory and isn't available to the CPU, so once you've sent something there for processing, there's no going back. This means your MeshCollider (a CPU thing) has no knowledge of what the graphics card is doing to your poor vertices on the GPU side. Therefore, it stays the same shape and size as the CPU-side Mesh it is attached to.
There are workarounds to get this to work anyway, of course. The value you're extruding your mesh by is set on the CPU side (it's the Material property called "_Amount" in that example). So, our goal is to be able to change the size of the mesh collider by the same relative amount as the GPU is going to change the actual Mesh in the vertex shader. I don't think you can change the size of a MeshCollider because it depends on the mesh, but you can change the size of a primitive-based collider (capsule, box, etc.) just fine, so perhaps you can approximate the collider with a set of primitive colliders instead? Sort of like what they do to the M4A1 on this page:
http://unity3d.com/support/documentation/Components/class-BoxCollider.html
If you really must use a MeshCollider, you could have two meshes, copies of each other, in exactly the same position, attach the collider to one of them, and disable its renderer. Then you can render one and mess with it on the GPU, and use the value "_Amount" to resize the other, invisible one, causing its MeshCollider to resize itself with it.
As an added note, just for the sake of correctness: I mentioned in my answer that the vertex shader executes after primitive assembly. It doesn't. Primitive assembly comes after vertex shading. :)
Sorry for the late reply. Yes, but what if I deform it using Perlin noise? How would I pass that to the CPU? Each vertex has a different value, and I really need to avoid calculating anything expensive on the CPU. So I would need to get their new positions calculated on the GPU and then store them into an array (I have no idea how to do this, though :[ ). Is it even possible? I want to generate terrains, so unfortunately I can't use Box Colliders and such... Thanks for your help, once again!
If this is for terrain generation, you should be fine calculating this on the CPU, I think? Unless the terrain's vertices get morphed and move around all the time, a terrain is something static, and it's fine to have the CPU pre-calculate it before starting the game, I should think.
If I have misunderstood, and this is something else about the terrain that changes all the time and must take place in real time, then I think what you want is a displacement map. It's mentioned in this article, too:
http://www.ozone3d.net/tutorials/vertex_displacement_mapping_p03.php
It's basically a texture, but instead of the fragment shader sampling it to determine the color of a fragment, the vertex shader does lookups in it to determine how to change the positions of vertices. Is that close to what you're looking for?
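A displacement-map lookup can be sketched as a vertex function for a Unity surface shader. The property names _DispTex and _DispAmount are hypothetical, and tex2Dlod is used because plain tex2D isn't available in the vertex stage (this typically requires #pragma target 3.0):

```hlsl
// Sketch of a displacement-mapped vertex function (assumed names).
sampler2D _DispTex;
float _DispAmount;

void vert(inout appdata_full v)
{
    // Sample the displacement map at this vertex's UV, mip level 0.
    float h = tex2Dlod(_DispTex, float4(v.texcoord.xy, 0, 0)).r;
    // Push the vertex along its normal by the sampled height.
    v.vertex.xyz += v.normal * h * _DispAmount;
}
```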
Answer by MountDoomTeam · Sep 03, 2013 at 03:29 AM
Here is an example of a mesh generated using DX11 ... including some source code
Your blog needs authentication now. Perhaps it didn't when you posted this, but anyhow I am very interested to see the source code generating a mesh with DX11 ...
MountDoomTeam was last online on March 08, so it's unlikely to get a response any time soon. Maybe he has email notification enabled...
Answer by Jeff-Kesselman · Jul 27, 2015 at 03:20 AM
Actually with DX11 it is now possible to create all geometry on the GPU. We create a vertex buffer with a Compute Shader and push it into the render pipeline.
And if you use a ComputeShader, you can pull the resulting mesh back from the GPU to create or modify a mesh collider with it. Be forewarned however that pushing data to and from the GPU is expensive.
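The compute-shader approach described above might look something like this in a Unity (DX11) compute shader that fills a buffer of positions on the GPU, here with a simple sine-wave grid. The kernel name, buffer layout, and wave function are assumptions for illustration:

```hlsl
// Hypothetical compute shader: generates a grid of vertex positions
// entirely on the GPU.
#pragma kernel FillVertices

RWStructuredBuffer<float3> _Vertices;
int _Width;

[numthreads(64, 1, 1)]
void FillVertices(uint3 id : SV_DispatchThreadID)
{
    uint x = id.x % _Width;
    uint z = id.x / _Width;
    float y = sin(x * 0.3) * cos(z * 0.3); // procedural height
    _Vertices[id.x] = float3(x, y, z);
}
```

On the C# side you would create a ComputeBuffer, dispatch this kernel, and either bind the buffer to a material for rendering or read it back with ComputeBuffer.GetData to build a Mesh and MeshCollider; that readback is the expensive GPU-to-CPU transfer warned about above.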