Unity Projection matrix
Let's say I have a camera set up in Unity at a specific location and I have the intrinsic parameters of this camera. How do I go about converting this intrinsic camera matrix into a Unity projection matrix? (I found some OpenGL references to the camera frustum, but the references were inconsistent and didn't achieve what I wanted anyway.)
Answer by Bunny83 · Aug 30, 2018 at 02:32 PM
What exact intrinsic parameters do you have? Note that a virtual camera doesn't really have a focal length or any other kind of distortion. Everything is just a matter of scale. The focal length is often interpreted as the near clipping plane distance, which may be used to calculate the FOV angle (or the other way round). In my matrix crash course I explain the different values of a projection matrix.
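As a rough illustration of that FOV relationship: assuming the focal length is expressed in pixels (as fy in an OpenCV-style intrinsic matrix) and you know the calibration image height, the vertical FOV follows from simple trigonometry. This is only a sketch with made-up example values, not a complete solution:

```csharp
using UnityEngine;

public class FovFromIntrinsics : MonoBehaviour
{
    // Assumed example values: focal length in pixels (fy) and calibration image height in pixels.
    public float focalLengthPixels = 3512.0f;
    public float imageHeightPixels = 960.0f;

    void Start()
    {
        // Pinhole model: vertical FOV = 2 * atan(imageHeight / (2 * fy)), converted to degrees.
        float fovY = 2.0f * Mathf.Atan(imageHeightPixels / (2.0f * focalLengthPixels)) * Mathf.Rad2Deg;
        GetComponent<Camera>().fieldOfView = fovY;
    }
}
```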
Keep in mind that a virtual camera is bound to the screen size. In most cases you specify the desired FOV angle (usually the vertical angle) as well as your desired near and far clipping planes. Changing the near and far planes does not change the actual view of the camera, just where the frustum clips the scene. Note that a too small near clipping plane or a too large far clipping plane will destroy your depth buffer resolution. So always make your near clipping plane as large as possible and as small as necessary, and the far plane as small as possible and as large as necessary.
Note that a virtual camera is a linear projection of the 3D scene. Any non-linear parameters cannot be applied through a matrix. If you need any further help you have to be much more specific about your case: what exact values do you have, and what exactly do you want to achieve? Unity's world space is a linear space. A virtual camera does a linear projection onto the screen (either orthographic or perspective).
I am trying to approximate an actual 3D scene I have for AR using Unity. There's an overhead camera that I've calibrated using OpenCV to obtain the intrinsic parameters. The camera matrix is:
[2.5103003493903329e+003, 0, 3.3467489966167909e+002]
[0, 3.5120948392828368e+003, 4.9385792110962171e+002]
[0, 0, 1]
"Any non linear parameters can not be applied through a matrix" - Is this true? What's the source for this? It seems unlikely that Unity is incapable of copying real cameras.
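For the original question of turning such an intrinsic matrix into a Unity projection matrix, the usual approach looks roughly like the sketch below. It follows the common OpenGL-style construction; the signs of the principal-point terms depend on your image-origin convention (OpenCV uses top-left, Unity's viewport bottom-left), and the image size and near/far values here are assumptions, so treat it as a starting point rather than a drop-in solution:

```csharp
using UnityEngine;

public class IntrinsicsToProjection : MonoBehaviour
{
    // Intrinsics from the OpenCV calibration (fx, fy in pixels; cx, cy principal point in pixels).
    public float fx = 2510.3f, fy = 3512.1f;
    public float cx = 334.7f,  cy = 493.9f;
    // Assumed calibration image size and clipping planes.
    public float width = 640f, height = 960f;
    public float near = 0.1f,  far = 100f;

    void Start()
    {
        var p = new Matrix4x4();
        p[0, 0] = 2f * fx / width;               // horizontal scale from focal length
        p[1, 1] = 2f * fy / height;              // vertical scale from focal length
        p[0, 2] = 1f - 2f * cx / width;          // horizontal principal point offset
        p[1, 2] = 2f * cy / height - 1f;         // vertical offset; sign depends on image origin
        p[2, 2] = -(far + near) / (far - near);  // standard OpenGL-style depth mapping
        p[2, 3] = -2f * far * near / (far - near);
        p[3, 2] = -1f;                           // perspective divide by -z (camera looks down -z)
        GetComponent<Camera>().projectionMatrix = p;
    }
}
```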
Matrix multiplication can only perform linear transformations. When using homogeneous coordinates we can do affine transformations, which are linear transformations but allow the origin to change due to a 4-dimensional shear. Matrices can represent a linear system of equations but not higher polynomials.
Of course you can "store" any sort of information inside a matrix, but that doesn't make it a usable matrix. Whether you store your parameters in separate variables, an array, a matrix, a text file or whatever is irrelevant. Any non-linear mapping would need to be performed "manually", either by software or inside a fragment shader. Though it's still not clear what you want to do here. The point of the calibration of the real-world camera is to make its image linear so it actually matches the linear mapping of a virtual / ideal camera. So if your live camera image is calibrated / adjusted, all the non-linearity should be removed (at least as well as the calibration allows).
Are you by any chance trying to distort Unity's camera image to match the image from an uncalibrated real-world camera? So in essence apply the inverse calibration to the virtual image? That would be rather strange.
Anyway, when you want to apply the calibration to the uncorrected live feed yourself, you have to write a shader that uses the same polynomials used by the calibration method.
OpenCV has a special method for this. Others have already written basic shaders that can apply the undistortion polynomials to a source image. Though again, you still haven't clearly said what you want to achieve. You just throw pieces of information at us and expect us to know what you want to do.
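For reference, the radial/tangential model that OpenCV's calibration uses (the Brown-Conrady polynomials) looks roughly like this when applied to a normalized camera coordinate. A fragment shader doing the undistortion would evaluate the same expression per pixel; the function below is just an illustrative C# sketch with assumed coefficient names (k1, k2, p1, p2), not OpenCV's actual API:

```csharp
using UnityEngine;

public static class DistortionSketch
{
    // Apply radial (k1, k2) and tangential (p1, p2) distortion to a normalized
    // camera coordinate (x, y). The coefficients come from the calibration.
    public static Vector2 Distort(Vector2 xy, float k1, float k2, float p1, float p2)
    {
        float x = xy.x, y = xy.y;
        float r2 = x * x + y * y;                    // squared radius from the optical axis
        float radial = 1f + k1 * r2 + k2 * r2 * r2;  // radial polynomial term
        float xd = x * radial + 2f * p1 * x * y + p2 * (r2 + 2f * x * x);
        float yd = y * radial + p1 * (r2 + 2f * y * y) + 2f * p2 * x * y;
        return new Vector2(xd, yd);
    }
}
```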
I see. So can I just fix the frustum corners of the Unity camera, if I already know which corners form the rectangle to be rendered? Usually the frustum corners move together, but can I set the individual corners of the frustum?
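If the goal is an asymmetric (off-center) frustum rather than true per-corner control, Unity can build one from explicit left/right/bottom/top extents on the near plane. A minimal sketch with made-up extents, assuming a Unity version that provides Matrix4x4.Frustum:

```csharp
using UnityEngine;

public class OffCenterFrustum : MonoBehaviour
{
    // Example extents of the near plane in camera space (assumed values).
    public float left = -0.2f, right = 0.3f;
    public float bottom = -0.1f, top = 0.25f;
    public float near = 0.1f, far = 100f;

    void LateUpdate()
    {
        // The left/right/bottom/top extents don't have to be symmetric around the
        // camera axis, which shifts the projection off-center.
        GetComponent<Camera>().projectionMatrix =
            Matrix4x4.Frustum(left, right, bottom, top, near, far);
    }
}
```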