Can you apply smoothing to split vertices on a generated mesh?
As far as I've gathered, Unity automatically applies smoothed normals across regular edges and creates hard edges through split vertices. Is there a way to smooth edges with split vertices as well? In Blender this is possible, but my current problem requires me to generate the mesh in Unity rather than importing it. Alternatively, is there a way to achieve the effect of split vertices (separate UVs on each side of the seam) without actually splitting them?
For context: I'm working on a form of tiled terrain that can be depressed at runtime to create riverbeds and lakes. I've written the vertex displacement code, but it requires a mesh with ordered vertices, so I'm having to generate the mesh itself to begin with (rather than my default solution of importing it from Blender, since I haven't found a remotely practical way to order vertices on Blender meshes).
The tiles are smoothed low-poly blocks, with the top having one of 12 different terrain textures and the sides having a rocky cliff texture, with a smooth transition between them. Having split vertices at the transition point lets me fit all required textures into a compact 4x4 atlas, since I only need 3 adjacent texture slots for the cliff texture (for looping edge continuity). Without the splitting, I would need 4 cliff texture tiles around each terrain texture, for a total of 60 awkwardly fitting slots taking up a massive 16x16 atlas, if we stick to powers of two for texture sizes. The looping edges also wouldn't work anywhere near as well because of the automatic anti-aliasing.
EDIT: I ended up with a somewhat crude but fairly performant solution, although it adds some surplus vertices, so it's not ideal. I duplicated the vertices at the seam, displaced them slightly down the side face and duplicated them again, like so:
for (int i = 0; i < line; i++)
{
    // Move the first duplicate a tiny step down the side face,
    // just below the original top-edge vertex
    v[v.Length - line * 2 + i] = v[i] * .99f + v[v.Length - line + i] * .01f;
    // The second duplicate shares that exact position...
    v[v.Length - line * 3 + i] = v[v.Length - line * 2 + i];
    // ...but gets a slightly offset UV, so the texture seam
    // now sits on the flat side face
    uv[uv.Length - line * 3 + i] = uv[i] - new Vector2(0, 0.0004f);
}
wherein "line" is the number of points along a single row of the tile top's vertex grid; vertices at the beginning of the array form that top grid, and those at the end of the array are the bottom vertices of the sides (this is from a tile variant with only one visible side).
This way, the seam is located on a flat face, meaning the normals on both sides of it are equal by default. The further down the face you can push that seam, the smoother the normal transition becomes, but that's limited entirely by how wide your texture's seamless buffer is. And, again, it adds an extra row of vertices (and triangles) to the mesh, so this may be a poor solution. I've previously used this flat-face concept to remove visible seams where tiles connect, where it probably makes more sense; for this case, however, the answer below looks far more universal.
Answer by BastianUrbach · Feb 14 at 09:12 AM
I don't think you can make Unity do it for you, but it's still possible in the sense that you can set anything you like as vertex normals, so you can calculate smooth normals yourself. Smooth vs. flat normals is basically vertex vs. face normals: you can either have one normal per face and use that directly, or one normal per (unique) vertex and interpolate the three distinct vertex normals across each triangle. In practice, even flat shading usually uses vertex normals, but with split vertices so that each vertex belongs to only a single face.
The best normals are usually created by reasoning about the shape the mesh is meant to represent, e.g. the normals of a sphere always point away from the center. However, that's often impractical, so they are commonly generated from the triangle mesh instead. To calculate a smooth vertex normal from a triangle mesh, you determine the face normal of each adjacent face and weight it by the angle between the two edges that meet at the vertex. Then you sum these weighted normals and normalize the result.
Warning: the following code is slow. It's intended as a starting point and for illustrative purposes. For large meshes, you will probably have to use some kind of acceleration structure for merging (the nested for loop).
void SmoothNormals(Mesh mesh) {
    var vertices = mesh.vertices;
    var triangles = mesh.triangles;
    var unmergedNormals = new Vector3[vertices.Length];
    var mergedNormals = new Vector3[vertices.Length];
    // Accumulate angle-weighted face normals on each vertex
    for (int i = 0; i < triangles.Length; i += 3) {
        var i0 = triangles[i + 0];
        var i1 = triangles[i + 1];
        var i2 = triangles[i + 2];
        var v0 = vertices[i0];
        var v1 = vertices[i1];
        var v2 = vertices[i2];
        var normal = Vector3.Cross(v1 - v0, v2 - v0).normalized;
        unmergedNormals[i0] += normal * Vector3.Angle(v1 - v0, v2 - v0);
        unmergedNormals[i1] += normal * Vector3.Angle(v0 - v1, v2 - v1);
        unmergedNormals[i2] += normal * Vector3.Angle(v0 - v2, v1 - v2);
    }
    // Merge the normals of all vertices that share a position
    // (this nested loop is the slow part mentioned above)
    for (int i = 0; i < vertices.Length; i++) {
        for (int j = 0; j < vertices.Length; j++) {
            if (vertices[i] == vertices[j]) {
                mergedNormals[i] += unmergedNormals[j];
            }
        }
    }
    for (int i = 0; i < mergedNormals.Length; i++) {
        mergedNormals[i] = mergedNormals[i].normalized;
    }
    mesh.normals = mergedNormals;
}
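To illustrate what an acceleration structure for the merging step could look like: the quadratic loop can be replaced by a single pass over a dictionary keyed by vertex position. This is a minimal standalone sketch, using System.Numerics.Vector3 in place of UnityEngine.Vector3 so it compiles outside Unity; note that dictionary lookups use exact float equality, whereas Unity's Vector3 == comparison is approximate, so split vertices must match bit-for-bit (or be quantized first) for this to merge them.

```csharp
using System;
using System.Collections.Generic;
using System.Numerics; // stand-in for UnityEngine.Vector3 outside Unity

static class NormalMerging {
    // Merges the (angle-weighted, unnormalized) normals of all vertices that
    // share a position, in roughly O(n) time instead of the O(n^2) nested loop.
    public static Vector3[] MergeNormals(Vector3[] vertices, Vector3[] unmergedNormals) {
        // First pass: sum the unmerged normals per unique position
        var sums = new Dictionary<Vector3, Vector3>();
        for (int i = 0; i < vertices.Length; i++) {
            sums.TryGetValue(vertices[i], out var sum); // sum is zero if absent
            sums[vertices[i]] = sum + unmergedNormals[i];
        }
        // Second pass: every vertex gets the normalized sum for its position
        var merged = new Vector3[vertices.Length];
        for (int i = 0; i < vertices.Length; i++) {
            merged[i] = Vector3.Normalize(sums[vertices[i]]);
        }
        return merged;
    }
}
```

The two passes replace the nested loop in the function above; everything else (the angle-weighted accumulation) stays the same.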
Edit: I should add that if you explicitly split vertices yourself, a much faster way of performing the merging step, one that doesn't require any acceleration structure, is to simply "remember" which unique vertex each vertex is a copy of. So basically, for each vertex, also store its index in the (fictional) list of unique, unsplit vertices.
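A sketch of that remembered-index idea (System.Numerics.Vector3 stands in for Unity's type so it runs outside Unity; uniqueIndex is a hypothetical array you would fill while generating the mesh, giving each split vertex the index of the unique vertex it was copied from):

```csharp
using System.Numerics; // stand-in for UnityEngine.Vector3 outside Unity

static class SplitVertexNormals {
    // uniqueIndex[i] = index of vertex i in the fictional list of unique,
    // unsplit vertices; uniqueCount = length of that list.
    public static Vector3[] MergeByUniqueIndex(
            Vector3[] unmergedNormals, int[] uniqueIndex, int uniqueCount) {
        // Accumulate each split vertex's normal into its unique vertex's slot
        var sums = new Vector3[uniqueCount];
        for (int i = 0; i < unmergedNormals.Length; i++) {
            sums[uniqueIndex[i]] += unmergedNormals[i];
        }
        // Copy the normalized sums back out to every split vertex
        var merged = new Vector3[unmergedNormals.Length];
        for (int i = 0; i < merged.Length; i++) {
            merged[i] = Vector3.Normalize(sums[uniqueIndex[i]]);
        }
        return merged;
    }
}
```

Since the mapping is known at generation time, no position comparisons are needed at all, which also sidesteps the float-equality question entirely.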
Hey, thanks for the answer and sorry for the late response (something came up in life and I forgot about this question).
That's a very interesting method. I wasn't even aware that you could set mesh normals manually; that's useful to know. Thanks especially for explaining the logic behind it and for the code example!
In my case, the meshes are fairly large (625 vertices minimum on the lowest graphics settings, up to tens or maybe hundreds of thousands on very detailed settings). However, they are (currently) generated during the loading phase or when changing settings, so performance is important but not crucial: a slower process only increases load times. This would be an extra step on top of the simpler solution I ended up with (adding it to my OP after this), so even optimized it would likely be a bit slower, but probably still viable and producing a smoother end result. I'll see how it looks with my current solution once the other graphical elements are finished, and try your method if it turns out to be insufficient.