How do I calculate a view matrix using Matrix4x4.LookAt?
I'm implementing a custom camera class, and this of course involves the calculation of a view matrix, to pass to my shaders.
I am using the following code to calculate this matrix:
viewMatrix = Matrix4x4.LookAt(
    camera.transform.position,                              // eye position
    camera.transform.position + camera.transform.forward,   // point to look at
    camera.transform.up                                      // up vector
);
I've set up an instance of my custom camera and a regular Unity camera with identical transforms for testing: position = (-8, 5, -12), rotation = (0, 0, 0), scale = (1, 1, 1).
However, when I compare the view matrices, my matrices are slightly different from Unity's, as seen in the frame debugger.
The values at (3, 0), (3, 1) and (3, 2) have the wrong sign for some reason, probably because I'm using the LookAt function incorrectly. Does anyone know what I'm doing wrong?
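For reference, the comparison can be reproduced in a script like this (a minimal sketch; referenceCamera is a hypothetical field pointing at the built-in camera set up for testing):
using UnityEngine;

public class ViewMatrixCompare : MonoBehaviour
{
    public Camera referenceCamera; // hypothetical: the built-in Unity camera with the same transform

    void Update()
    {
        Transform t = referenceCamera.transform;
        // Same construction as above: LookAt from the camera position towards its forward direction.
        Matrix4x4 mine = Matrix4x4.LookAt(t.position, t.position + t.forward, t.up);
        Debug.Log("LookAt-based matrix:\n" + mine);
        Debug.Log("Unity's view matrix:\n" + referenceCamera.worldToCameraMatrix);
    }
}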
Answer by Sinterklaas · May 17, 2021 at 12:49 AM
EDIT: the original accepted answer is bad. Use this instead, and see the original post below the line and replies for an explanation.
// Invert the camera's position and rotation (scale deliberately ignored) to get a world-to-camera matrix.
viewMatrix = Matrix4x4.TRS(transform.position, transform.rotation, Vector3.one).inverse;
// On platforms that use a reversed depth buffer, negate the Z row.
if (SystemInfo.usesReversedZBuffer)
{
    viewMatrix.m20 = -viewMatrix.m20;
    viewMatrix.m21 = -viewMatrix.m21;
    viewMatrix.m22 = -viewMatrix.m22;
    viewMatrix.m23 = -viewMatrix.m23;
}
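For completeness, once the matrix is computed it can be handed to a shader in the usual way; the property name and material below are made-up examples:
// Hypothetical names for illustration only.
customMaterial.SetMatrix("_CustomViewMatrix", viewMatrix);
// or, for every shader that declares the property:
Shader.SetGlobalMatrix("_CustomViewMatrix", viewMatrix);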
Ok, so I seem to have found the solution, partially thanks to Bunny83's suggestion about inverting matrices, partially through pure trial and error:
viewMatrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, new Vector3(1, 1, -1))
* camera.transform.localToWorldMatrix.inverse;
This appears to produce a view matrix identical to Unity's, as long as the camera object's scale is left at (1, 1, 1), which is acceptable behaviour for my use case.
I still don't get why Matrix4x4.LookAt gave me wrong results, since afaik it takes world-space values as inputs and outputs a worldspace-to-viewspace matrix, but oh well.
I've figured out why the scale caused problems: Unity cameras ignore scale entirely. The following code should produce a view matrix identical to Unity's, no matter the camera's Transform settings:
// Re-apply the camera's world scale, with Z negated, to cancel the scale baked into the inverse.
Vector3 antiScale = new Vector3(camera.transform.lossyScale.x,
                                camera.transform.lossyScale.y,
                                -camera.transform.lossyScale.z);
viewMatrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, antiScale)
             * camera.transform.localToWorldMatrix.inverse;
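To spell out why this cancels the scale (a sketch of the algebra, assuming the camera's localToWorldMatrix factors as T·R·S with an axis-aligned scale s, which is also why nested non-uniform scales break it):

$$S(s_x, s_y, -s_z)\,\bigl(T\,R\,S(s_x, s_y, s_z)\bigr)^{-1} = S(s_x, s_y, -s_z)\,S(1/s_x, 1/s_y, 1/s_z)\,R^{-1}T^{-1} = S(1, 1, -1)\,R^{-1}T^{-1}$$

which is the scale-free inverse with the usual Z flip.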
That doesn't make much sense. If you want to calculate the view / camera matrix of a camera manually, you just do:
Transform camT = camera.transform;
// Inverse of the camera's position and rotation, with the Z axis flipped via the scale.
var cameraMatrix = Matrix4x4.TRS(camT.position, camT.rotation, new Vector3(1f, 1f, -1f)).inverse;
Trying to revert the scaling of the object won't work if the camera is nested and the parent and the camera have non-uniform scales. It's best to avoid using the scale at all.
ps: Unity already provides both matrices: localToWorldMatrix and worldToLocalMatrix. So grabbing one and calculating the inverse is kind of wasteful, especially since inverting a matrix is not a trivial operation. Of course, to calculate the camera matrix without the scale, there's no way around the inverse.
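For instance, a sketch of that shortcut using the cached matrix directly; note that it bakes the object's scale into the result, so it only matches Unity's view matrix when the scale is (1, 1, 1):
// worldToLocalMatrix is precomputed by Unity, so no explicit inverse is needed here,
// but unlike Unity's view matrix it still contains the object's scale.
Matrix4x4 viewMatrix = Matrix4x4.Scale(new Vector3(1f, 1f, -1f))
                       * camera.transform.worldToLocalMatrix;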
pps:
"I still don't get why Matrix4x4.LookAt gave me wrong results, since afaik it takes world-space values as inputs and outputs a worldspace-to-viewspace matrix."
No, that's not what LookAt does. LookAt creates a matrix that makes an object "look at" a certain point given in world-space coordinates. That matrix still converts from local space to world space. View / camera space is the local space of the camera object, and therefore you need the inverse.
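In other words, to turn LookAt's result into a view matrix you would still have to invert it (and apply the usual Z flip); a minimal sketch of that reading, assuming an unscaled camera:
Transform t = camera.transform;
// LookAt builds a local-to-world (model) matrix for something placed at 'from' and facing 'to'...
Matrix4x4 lookAt = Matrix4x4.LookAt(t.position, t.position + t.forward, t.up);
// ...so the world-to-view matrix is its inverse, with the Z axis negated on top.
Matrix4x4 viewMatrix = Matrix4x4.Scale(new Vector3(1f, 1f, -1f)) * lookAt.inverse;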
Ok, late reply, but I think I've got this figured out now.
So this solution didn't work for me and I couldn't work out why for a little while. Turns out some platforms, including my computer, have a reversed z-buffer.
This edit of your solution should hopefully work on all platforms.
viewMatrix = Matrix4x4.TRS(transform.position, transform.rotation, Vector3.one).inverse;
if (SystemInfo.usesReversedZBuffer)
{
    viewMatrix.m20 = -viewMatrix.m20;
    viewMatrix.m21 = -viewMatrix.m21;
    viewMatrix.m22 = -viewMatrix.m22;
    viewMatrix.m23 = -viewMatrix.m23;
}
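Side note (beyond what this thread strictly needs): when the projection matrix is also built by hand, the reversed depth range and other per-platform differences are usually handled there; Unity's GL.GetGPUProjectionMatrix does that conversion. A rough sketch with made-up field-of-view, aspect and clip-plane values:
// Standard OpenGL-style projection matrix...
Matrix4x4 proj = Matrix4x4.Perspective(60f, 16f / 9f, 0.3f, 1000f);
// ...converted to the active graphics API's convention (reversed Z, flipped Y when rendering into a texture, etc.).
Matrix4x4 gpuProj = GL.GetGPUProjectionMatrix(proj, false);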
Answer by Bunny83 · May 16, 2021 at 10:39 PM
The view matrix is supposed to transform from world space to camera space, so you need the inverse of the camera's transform: the normal transform converts from local / camera space to world space, not the other way round. Note that different graphics APIs (DirectX, OpenGL, WebGL, OpenGL ES) may require slightly different formats, especially for the z axis, which is also inverted. That inversion comes from converting Unity's left-handed system into the usual right-handed one.
ps: To get the view / camera matrix from a Unity Camera, use the worldToCameraMatrix property, which takes care of all the necessary tweaks. Of course, when you roll your own camera logic, you're on your own.