How do I reproduce the MVP matrix?
I'm trying to reproduce glstate.matrix.mvp. Does this look right?
objectTransform.localToWorldMatrix*
cameraTransform.worldToLocalMatrix*
cameraTransform.projectionMatrix
It doesn't work when I use it in a vertex shader in place of glstate.matrix.mvp: the object disappears entirely and I can't find it anywhere.
Answer by Daniel-Brauer · Mar 12, 2010 at 05:19 AM
So it turns out I had two problems. The first should be obvious to any graphics programmer: my matrices were in reverse order. I didn't stop to think that even though it is called the MVP matrix, the actual order of the multiplication is P*V*M, because the matrices are left-multiplied with any given vector.
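For later readers, the ordering point can be sketched in Unity C# (`transform`, `camera`, and `vertex` here are illustrative stand-ins for your object's transform, the rendering camera, and a model-space vertex):

// With column vectors (the Unity/OpenGL convention), the matrix nearest
// the vector is applied first, so the chain reads right-to-left:
//   clip = P * (V * (M * v))
// and the combined "MVP" matrix must therefore be built as P * V * M.
Matrix4x4 M = transform.localToWorldMatrix;   // model -> world
Matrix4x4 V = camera.worldToCameraMatrix;     // world -> eye
Matrix4x4 P = camera.projectionMatrix;        // eye   -> clip
Matrix4x4 mvp = P * V * M;
Vector4 clip = mvp * new Vector4(vertex.x, vertex.y, vertex.z, 1f);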
The second problem was far more difficult to figure out: I needed to switch the sign on the third and fourth entries in the third row of the projection matrix. No idea why, but if you don't do this, your camera will be looking backwards and have its coordinate system flipped. This is on a Mac in OpenGL, by the way. I haven't tried it on Windows yet.
Update: I have since talked to Aras about exactly what to do and why. Here are the values I used. It works on both Mac OS and Windows.
Further Update: I made the following code into actual C# to avoid confusion.
bool d3d = SystemInfo.graphicsDeviceVersion.IndexOf("Direct3D") > -1;
Matrix4x4 M = transform.localToWorldMatrix;
Matrix4x4 V = camera.worldToCameraMatrix;
Matrix4x4 P = camera.projectionMatrix;
if (d3d) {
    // Invert Y for rendering to a render texture
    for (int i = 0; i < 4; i++) {
        P[1,i] = -P[1,i];
    }
    // Scale and bias from OpenGL -> D3D depth range
    for (int i = 0; i < 4; i++) {
        P[2,i] = P[2,i]*0.5f + P[3,i]*0.5f;
    }
}
Matrix4x4 MVP = P*V*M;
Thank you! I had to change it a bit to make it work without errors, and I'm still not sure how to use it properly (it seems not to work). I updated my post below.
Surprisingly GL.GetGPUProjectionMatrix(camera.projectionMatrix) does not do this.
Answer by bam93 · Jan 06, 2011 at 09:23 AM
Hi, could you please post the final working code? I have the very same problem and would appreciate it very much :)
Update: Using the posted code I did some tests. First, to make the code run/compile without errors I had to change it as follows:
d3d = SystemInfo.graphicsDeviceVersion.IndexOf("Direct3D") > -1;
M = transform.localToWorldMatrix;
V = camera.worldToCameraMatrix;
P = camera.projectionMatrix;
if (d3d) {
    // Invert Y for rendering to a render texture
    for (i = 0; i < 4; i++) {
        P[1,i] = -P[1,i];
    }
    // Scale and bias from OpenGL -> D3D depth range
    for (i = 0; i < 4; i++) {
        P[2,i] = P[2,i]*0.5 + P[3,i]*0.5;
    }
}
MVP = P*V*M;
Just let me briefly explain what I am trying to do. To make a Cg shader work properly in Unity, I absolutely need the inverse ModelViewProjection matrix. It is not available in Unity (for compatibility reasons with D3D, if I understand correctly), so I am trying to calculate this matrix in a script and then provide it to my shader. The shader takes a simple cube as its input mesh and ray-casts a sphere from it. So my precise question is: what do transform and camera have to be in my case? I tried the following, but I do not get the expected result (i.e. the sphere, which renders fine outside of Unity using mvp.inverse):
function Update () {
    d3d = SystemInfo.graphicsDeviceVersion.IndexOf("Direct3D") > -1;
    M = GameObject.Find("Cube1").transform.localToWorldMatrix;
    V = Camera.main.worldToCameraMatrix;
    P = Camera.main.projectionMatrix;
    if (d3d) {
        // Invert Y for rendering to a render texture
        for (i = 0; i < 4; i++) {
            P[1,i] = -P[1,i];
        }
        // Scale and bias from OpenGL -> D3D depth range
        for (i = 0; i < 4; i++) {
            P[2,i] = P[2,i]*0.5 + P[3,i]*0.5;
        }
    }
    MVP = P*V*M;
    Shader.SetGlobalMatrix("_matMVPI", MVP.inverse);
}
This code is run in a script attached to the main camera, but I guess it doesn't matter to what I attach the script. The Cg shader accesses _matMVPI in its vertex shader using
float4x4 ModelViewProjI = _matMVPI;
I wonder whether I am using the right M, V and P matrices, i.e. M from the cube's transform and V, P from the main camera. Of course it's always difficult to debug what's going on inside the shader. I tried multiplying MVP by its (supposed) inverse and changing the color if, say, the first matrix element is 1.0, but that doesn't work out either (which agrees with the sphere not rendering as it should).
You should be able to call inverse(UNITY_MATRIX_MVP) to get the inverse model-view-projection matrix directly in the shader?
EDIT: This of course assumes you are using CGPROGRAM for the shader, not ShaderLab.
Answer by MUGIK · May 26 at 03:07 PM
This worked for me:
Matrix4x4 GetVPMatrix(Camera cam)
    => GL.GetGPUProjectionMatrix(cam.projectionMatrix, true) * cam.worldToCameraMatrix;
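To get a full MVP from this helper you would still multiply in the model matrix on the right. A minimal sketch of that usage (the shader property name `_matMVP` and the renderer reference are assumptions, not from the original; the `true` argument tells GL.GetGPUProjectionMatrix the matrix will be used for rendering into a render texture):

// Hypothetical usage: combine the helper's view-projection matrix with an
// object's model matrix, then hand the result to a shader property.
Matrix4x4 vp  = GetVPMatrix(Camera.main);
Matrix4x4 mvp = vp * someRenderer.transform.localToWorldMatrix;
Shader.SetGlobalMatrix("_matMVP", mvp);   // property name is illustrative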