Question by AnsisMalins · Sep 27, 2016 at 12:10 PM
Tags: camera · transform · matrix
How do I optimize transforming vertices to screen space?
I think I should be able to optimize this (two vector-matrix multiplies per iteration):
Vector3[] vertices = renderer.GetComponent<MeshFilter>().sharedMesh.vertices;
for (int i = 0; i < vertices.Length; i++)
    vertices[i] = camera.WorldToScreenPoint(transform.TransformPoint(vertices[i]));
Into something like this (one vector-matrix multiply per iteration):
Vector3[] vertices = renderer.GetComponent<MeshFilter>().sharedMesh.vertices;
var m = renderer.transform.localToWorldMatrix * camera.worldToCameraMatrix
    * camera.projectionMatrix * somethingElsePerhaps;
for (int i = 0; i < vertices.Length; i++)
    vertices[i] = m.MultiplyPoint(vertices[i]);
How do I do it?
Best Answer
Answer by AnsisMalins · May 17, 2017 at 04:38 PM
Solution for posterity:
// Once per frame
var worldToProjectionMatrix = camera.projectionMatrix * camera.worldToCameraMatrix;
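// Viewport transform: scales and offsets clip-space x,y so that, after the
// perspective divide, the -1..1 range maps to 0..pixelWidth / 0..pixelHeight.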
var projectionToScreenMatrix = Matrix4x4.TRS(
    new Vector3(camera.pixelWidth * 0.5f, camera.pixelHeight * 0.5f, 0),
    Quaternion.identity,
    new Vector3(camera.pixelWidth * 0.5f, camera.pixelHeight * 0.5f, 1));
var worldToScreenMatrix = projectionToScreenMatrix * worldToProjectionMatrix;
// Once per object per frame
var localToScreenMatrix = worldToScreenMatrix * renderer.transform.localToWorldMatrix;
Vector3[] vertices = renderer.GetComponent<MeshFilter>().sharedMesh.vertices;
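// MultiplyPoint applies the full 4x4 transform, including the homogeneous divide by w,
// so each result is already in pixel coordinates.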
for (int i = 0; i < vertices.Length; i++)
    vertices[i] = localToScreenMatrix.MultiplyPoint(vertices[i]);
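Folding the viewport offset into the matrix works because the translation column is multiplied by the clip-space w, so after MultiplyPoint divides by w it reduces to the ordinary post-divide viewport mapping. Below is a minimal sketch of how the two snippets could be wired into a component; the class name, the cached fields, and the use of LateUpdate are assumptions for illustration, not part of the original answer.

using UnityEngine;

// Minimal sketch (assumed names): caches the mesh vertices once and recomputes
// their screen-space positions each frame using the combined matrix.
public class ScreenSpaceVertices : MonoBehaviour
{
    public Camera targetCamera;        // assumed to be assigned in the Inspector
    private Vector3[] localVertices;   // mesh-space vertices, read once
    private Vector3[] screenVertices;  // pixel-space results, refreshed per frame

    void Start()
    {
        if (targetCamera == null)
            targetCamera = Camera.main;
        localVertices = GetComponent<MeshFilter>().sharedMesh.vertices;
        screenVertices = new Vector3[localVertices.Length];
    }

    void LateUpdate()
    {
        // Once per frame: world -> projection, then projection -> screen.
        var worldToProjection = targetCamera.projectionMatrix * targetCamera.worldToCameraMatrix;
        var projectionToScreen = Matrix4x4.TRS(
            new Vector3(targetCamera.pixelWidth * 0.5f, targetCamera.pixelHeight * 0.5f, 0),
            Quaternion.identity,
            new Vector3(targetCamera.pixelWidth * 0.5f, targetCamera.pixelHeight * 0.5f, 1));
        var worldToScreen = projectionToScreen * worldToProjection;

        // Once per object: prepend this object's local-to-world matrix.
        var localToScreen = worldToScreen * transform.localToWorldMatrix;
        for (int i = 0; i < localVertices.Length; i++)
            screenVertices[i] = localToScreen.MultiplyPoint(localVertices[i]);
        // screenVertices now holds pixel coordinates for this frame.
    }
}

One difference to be aware of: Camera.WorldToScreenPoint returns z as the distance from the camera in world units, whereas the matrix route leaves z as projected depth after the divide; the x and y pixel coordinates should match.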