What is the order/connection of array elements in mesh.vertices?
I'm simulating mesh subdivision, but I only need to know the position of each mesh vertex so I can interpolate between vertices, check whether the distance is large enough to create a new (temporary) vertex, add it to listOfSubdivisionVertices
, and then return that list to my algorithm.
But I recently found that mesh.vertices
contains 24 elements (thread for that) for a regular 3D cube with 8 vertices. So 16 of those vertices (normals + UVs, I assumed) are unnecessary data and wasted computation time, but I couldn't find anywhere what the order of these vertex arrays is.
Like:
{
vertex1
normal1
uv1
vertex2
normal2
uv2
etc
}
or:
{
vertex1
vertex2
vertex3
normal1
normal2
normal3
uv1
uv2
uv3
}
So: is it the first layout (each vertex's data grouped together) or the second (an array of position Vector3s, an array of normal Vector3s, and an array of UV Vector2s)?
Answer by Bunny83 · May 20, 2021 at 02:49 PM
Well, almost everything you have concluded so far is wrong ^^. The vertices array only contains positions. It does not contain normal or uv information; those are stored in separate arrays. Note that this layout is only for scripting usage. The actual vertex format used by the GPU may look completely different.
So a single vertex is made up of a position (from the vertices array), a normal (from the normals array) and a UV coordinate (from the uv array). They are grouped by the same index, so index 0 in the vertices array belongs to index 0 in the normals and uv arrays (and to any other additional vertex attribute, like secondary uv, tangents, colors, ...). That's how a vertex is actually defined inside the Mesh class.
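That grouping by index can be sketched like this (a minimal, hypothetical MonoBehaviour; it assumes the mesh actually has normals and UVs assigned, otherwise those arrays come back empty):

```csharp
using UnityEngine;

// Minimal sketch: the attribute arrays are parallel, so the same index i
// addresses every attribute of one logical vertex.
public class VertexDump : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        Vector3[] vertices = mesh.vertices; // positions only
        Vector3[] normals  = mesh.normals;  // same length as vertices (if set)
        Vector2[] uv       = mesh.uv;       // same length as vertices (if set)

        for (int i = 0; i < vertices.Length; i++)
        {
            Debug.Log($"vertex {i}: pos {vertices[i]}, normal {normals[i]}, uv {uv[i]}");
        }
    }
}
```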
How the vertices are actually ordered is not specified anywhere; they could be in any order. It doesn't really matter in which order they are, since the triangles array forms triangles by providing the 3 vertex indices each triangle is made of.
The "unnecessary data" is not really unnecessary. A cube has 6 sides and therefore each side needs distinct normal vectors. Since the normal vectors are defined at the vertex, you need 3 different versions of the same vertex position since at each corner 3 different faces meet. So you have 3*8 vertices or 24. Another way to look at it is that a cube is made up of 6 unrelated quad meshes. Each quad has 4 vertices. Those 4 vertices will all have the same normal vector in order for the face to appear "flat". Now you simply have 6 faces so you need 6 * 4 vertices == 24.
Just to be clear about that: where a certain vertex is located in the vertices array is completely irrelevant as long as the triangles reference the correct vertices. So a triangle may use vertex #0, #5 and #11 while another triangle may use vertex #2, #3, #4. You could even jumble all vertices up, and as long as you also adjusted the triangles array to still reference the correct vertices, the mesh would not change at all. The vertices array is literally just a bucket of vertices; the triangles array is what actually introduces order into the mix.
If you want to examine the vertices and triangles of a Mesh in Unity, you can use my UVViewer editor window. Once you have it in an "Editor" folder you can open the editor window through the menu. Now just select any GameObject with a MeshRenderer in the scene and you can view the UV map(s) of that mesh. I've also added a triangle list view which may be useful here. Here's the result in Unity 2020.2.3f1 (the version I currently had at hand).
As you can see the first triangle is made up of vertex (0, 2, 3) and the second triangle is made up of vertex (0,3,1). Therefore vertex 0,1,2 and 3 belong to one face of the cube since those two triangles actually share two vertices. The next face is made up of the vertices 4,5,8,9, the next one of 6,7,10,11.
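A minimal sketch of how such a face could be built by hand (hypothetical helper; the facing direction follows from the winding order, which RecalculateNormals uses): four positions, with the two triangles (0, 2, 3) and (0, 3, 1) sharing the edge 0-3.

```csharp
using UnityEngine;

// Minimal sketch of how the triangles array gives the vertex bucket its
// meaning: four positions, and two index triples that form one quad face.
public static class QuadBuilder
{
    public static Mesh BuildQuad()
    {
        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            new Vector3(0, 0, 0), // index 0
            new Vector3(1, 0, 0), // index 1
            new Vector3(0, 1, 0), // index 2
            new Vector3(1, 1, 0), // index 3
        };
        // Each group of 3 indices is one triangle; both triangles share edge 0-3.
        mesh.triangles = new[] { 0, 2, 3,   0, 3, 1 };
        mesh.RecalculateNormals(); // normals derived from the winding order
        return mesh;
    }
}
```

Reordering the vertices array while updating the index triples accordingly would produce the exact same quad, which is the point being made above.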
Keep in mind that this is not a fixed rule. That order may even change from Unity version to Unity version. In fact, the default sphere mesh has changed several times over the years because the UV map and the location of the seams were changed to get better lightmap UV coordinates. So you should never rely on a certain vertex order for whatever you want to do.
A long time ago I made this MeshHelper class, which provides a 2-way and a 3-way subdivide method. However, the Mesh class has changed quite a bit since then: back then we only had 16-bit index buffers, and support for more UV channels as well as different mesh topologies was added later, which my class doesn't account for. It should still work for the default cube and simple triangle meshes, though.
Thanks for your quick & extensive answer!
First off, I'll explain why I have to subdivide a mesh (or rather, fake it - explained below).
I'm currently working on a scene-voxelization tool, which is going to be used for A* pathfinding. I'll write down my process for that algorithm first:
Divide an area of given dimensions into voxels of a given voxelSize. To check whether each of these voxels is traversable, I have to check whether any mesh vertices lie inside it. Iterating over every single voxel is probably not the most performant approach, but that's honestly not the main priority. How? An empty GameObject with a BoxCollider of size voxelSize is placed at currentVoxel.VoxelPosition
, and collider.bounds.Contains(vertex)
is used to check voxel/vertex overlap.
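That check could be sketched without instantiating a collider per voxel, by constructing a Bounds value directly (a hedged sketch; VoxelContainsVertex and its parameters are hypothetical names):

```csharp
using UnityEngine;

// Hedged sketch of the voxel/vertex overlap test described above.
public static class VoxelCheck
{
    // Returns true if any world-space vertex of the mesh lies inside the voxel.
    public static bool VoxelContainsVertex(Bounds voxelBounds, MeshFilter meshFilter)
    {
        Transform t = meshFilter.transform;
        foreach (Vector3 localVertex in meshFilter.sharedMesh.vertices)
        {
            // mesh.vertices are in local space; convert before testing.
            Vector3 worldVertex = t.TransformPoint(localVertex);
            if (voxelBounds.Contains(worldVertex))
                return true;
        }
        return false;
    }
}
```

Using `new Bounds(voxelCenter, Vector3.one * voxelSize)` avoids creating an empty GameObject with a BoxCollider for every voxel.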
If true, the voxel is marked as "collider", and therefore not traversable. All this works just fine for cubes with a scale less than or equal to voxelSize
.
You'll see (tool demonstration with bugs) what happens when it's larger than voxelSize
: because the cube is stretched, the "longer" faces have no vertices, at least none in any voxel besides the corners -> those voxels are not marked as non-traversable -> the algorithm breaks because the AI can move through the mesh.
So my (at least partly) incorrect train of thought was to make sure there is a vertex in the voxel when it's supposed to be, using subdivision. The subdivision wouldn't actually be applied to the mesh, as that's not necessary. I just need that subdivided mesh data. All this is, of course, being calculated outside of runtime & saved to a ScriptableObject that the AI can use during runtime to determine a valid path.
The same logic should apply for MeshColliders (which is of course what a large part of the scene will exist out of), since I should be able to interpolate between those vertices as well.
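A hedged sketch of that "fake subdivision" idea: walk the triangles array and, for every edge longer than voxelSize, emit temporary interpolated points without touching the mesh itself (listOfSubdivisionVertices matches the post; every other name here is hypothetical):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hedged sketch: sample extra points along every triangle edge so that
// long, stretched edges still place points into each voxel they cross.
public static class EdgeSampler
{
    public static List<Vector3> Sample(Mesh mesh, float voxelSize)
    {
        var listOfSubdivisionVertices = new List<Vector3>();
        Vector3[] vertices = mesh.vertices;
        int[] triangles = mesh.triangles;

        // Every 3 consecutive indices form one triangle; sample its 3 edges.
        for (int i = 0; i < triangles.Length; i += 3)
        {
            SampleEdge(vertices[triangles[i]],     vertices[triangles[i + 1]], voxelSize, listOfSubdivisionVertices);
            SampleEdge(vertices[triangles[i + 1]], vertices[triangles[i + 2]], voxelSize, listOfSubdivisionVertices);
            SampleEdge(vertices[triangles[i + 2]], vertices[triangles[i]],     voxelSize, listOfSubdivisionVertices);
        }
        return listOfSubdivisionVertices;
    }

    // Emit evenly spaced interior points so no gap along the edge exceeds `step`.
    static void SampleEdge(Vector3 a, Vector3 b, float step, List<Vector3> output)
    {
        int steps = Mathf.CeilToInt(Vector3.Distance(a, b) / step);
        for (int s = 1; s < steps; s++)
            output.Add(Vector3.Lerp(a, b, (float)s / steps));
    }
}
```

Note that edge samples alone still miss the interior of large faces; sampling across each triangle's surface, or a physics query such as Physics.OverlapBox per voxel, may be a more robust occupancy test.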
Although, while writing this, I'm not sure anymore whether the (partial) algorithm I constructed would work like I had in mind, or if I'm thinking about it in a completely wrong way..
Anyway, thanks again for your extensive & helpful answer! At least I won't waste any more time going down the wrong road with my own "solution" and ending up with a tool that doesn't work (& never will, because the logic just isn't in line with how it actually works internally).
Answer by Meijer · May 20, 2021 at 02:44 PM
[ SOLVED ]
EDIT: new accepted answer, keeping this one (my own) to archive comments below
After some more digging, I found that the 2nd option is the correct one:
For every vertex there can be a normal, texture coordinates, a color and a tangent. These are optional, and can be removed at will. All vertex information is stored in separate arrays of the same size, so if your mesh has 10 vertices, you would also have 10-size arrays for normals and other attributes.
Note that this is just how Unity presents the mesh data to us. Internally, Unity decides which vertex format to use based on which vertex attributes you actually set on the mesh.