Can I use Inverse Kinematics just to move parts of the model to certain points in the scene (without having animations)?
I'm feeding the game with coordinate information from a motion tracking camera and translating it into Unity coordinates. I want to move the hands, head and shoulders to the points the camera sent, but have the rest of the body move with them as well.
I've read a bit about kinematics in Unity, but it uses Animators and animations, and I'm not sure if that would accomplish what I want.
Edit: For clarity, this is what I have, and I want to connect the objects (currently represented by cubes) to other objects, or use an avatar model that moves its hands to the coordinates received from the camera (e.g. right hand (x: 5, y: 3), left hand (x: 3, y: 2.5)).
Answer by Dynosius · Aug 10, 2018 at 09:38 AM
Okay, I managed to achieve my goal, having found a workaround.
I used Mecanim IK to animate a character's body (previously downloaded from Mixamo). The trick was to use the character's Idle animation, so the character didn't really move, which is what I wanted (having IK "without an actual animation").
The result is here: https://www.youtube.com/watch?v=7SWS31l9ze0&feature=youtu.be
I set the IK goals to the positions of the GameObjects I was moving with the coordinates given by my camera/application, added some interpolation, and voilà.
Code for the controller:
// Smoothly move the IK target objects toward the camera-supplied coordinates.
leftHand.transform.position = Vector3.Lerp(leftHand.transform.position, gameobjectVectors[2], Time.deltaTime * 5f);
rightHand.transform.position = Vector3.Lerp(rightHand.transform.position, gameobjectVectors[3], Time.deltaTime * 5f);
middleBody.transform.position = gameobjectVectors[5];

// Move the character to where the spine is on the x axis.
if (gameobjectVectors[5][0] != 0f)
{
    characterVector.x = gameobjectVectors[5][0];
    playerCharacter.position = Vector3.Lerp(playerCharacter.position, characterVector, Time.deltaTime * 5f);
}
And IK controller:
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class IKControlScript : MonoBehaviour
{
    private RealsenseController RSController;
    protected Animator animator;
    public Transform rightHandMiddleFinger;
    public Transform leftHandMiddleFinger;
    private Transform rightHandCoordinate;
    private Transform leftHandCoordinate;
    private Transform myRightHand, myLeftHand;

    void Start()
    {
        animator = GetComponent<Animator>();
        // "lijevaSaka"/"desnaSaka" are Croatian for "left hand"/"right hand";
        // the left/right swap here presumably mirrors the camera's view.
        rightHandCoordinate = GameObject.Find("lijevaSaka").transform;
        leftHandCoordinate = GameObject.Find("desnaSaka").transform;
        myRightHand = rightHandMiddleFinger.parent;
        myLeftHand = leftHandMiddleFinger.parent;
    }

    // A callback for calculating IK; invoked by the Animator each frame.
    void OnAnimatorIK()
    {
        if (animator)
        {
            animator.SetIKPosition(AvatarIKGoal.RightHand, rightHandCoordinate.position);
            animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1);
            animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHandCoordinate.position);
            animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1);
        }
    }
}
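Since the original question also mentions the head, it's worth noting (as a hedged sketch, not part of the original script) that Mecanim's OnAnimatorIK callback also exposes look-at and rotation goals. The `headTarget` Transform below is a hypothetical field you would assign yourself:

// Sketch only: how the OnAnimatorIK body could be extended.
// `headTarget` is an assumed Transform tracking the head coordinate.
void OnAnimatorIK()
{
    if (animator)
    {
        // Aim the head at a tracked point (weight 1 = fully IK-driven).
        animator.SetLookAtWeight(1f);
        animator.SetLookAtPosition(headTarget.position);

        // Optionally match hand orientation, not just position.
        animator.SetIKRotation(AvatarIKGoal.RightHand, rightHandCoordinate.rotation);
        animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
    }
}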
Answer by JVene · Aug 08, 2018 at 04:38 PM
You're not sure because you probably realize that the animations would limit the potential postures of the model, and you want any reasonable posture.
I think you mean inverse kinematics, a subject that extends beyond Unity, which (oversimplified) means you position a part of the model that is connected to other parts, probably through joints or something related to joints, which then move appropriately to accommodate that position.
A personalized, illustrative example is our own hand. From infancy we learn how to position our hand to grasp an object we've found through vision. In reality, the only control we have over the arm is to set the angles of the shoulder, elbow and wrist. Our brain solves the implied math automatically, without formalized measurements. The method is to project an imaginary line from the shoulder to the target. This implies a distance from the shoulder, and because there's one elbow and two links (upper arm and forearm), an imaginary triangle is formed from shoulder to wrist that includes the elbow. For any given distance to the target there is only one angle the elbow can assume to exactly match that distance. That is the first calculation required to control the arm, and it happens to be the first calculation required to solve the inverse kinematic puzzle: controlling the arm from the target backwards to the shoulder.
The second step, for the infant or the practiced adult, is to adjust the shoulder joint's angle, as this "sweeps" the "system" created in the first step to align with the target. With these two angles the wrist is positioned at the target, the hand may use the wrist for small error adjustments, and the target is reached. Both calculations are done with trigonometry (in the case of arms): specifically the law of cosines, a bit of the law of sines, and maybe some simpler right-angle trig.
That is inverse kinematics in a nutshell. The process can cascade through a number of joints and positions, but the case you're describing brings a lot of conditions to mind. If, in my previous example, the target is too far away, the elbow can straighten out (for maximum reach) and still not touch the target. The toddler gets up to walk as a result.
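The "too far away" case above can be handled before solving, by clamping the target onto the sphere of maximum reach. A minimal sketch using Unity's Vector3 (the method and parameter names are illustrative):

// If the target is beyond upperLen + foreLen, pull it back onto the
// sphere of maximum reach centred on the shoulder.
static Vector3 ClampToReach(Vector3 shoulder, Vector3 target,
                            float upperLen, float foreLen)
{
    Vector3 toTarget = target - shoulder;
    float maxReach = upperLen + foreLen;
    if (toTarget.magnitude > maxReach)
        toTarget = toTarget.normalized * maxReach;
    return shoulder + toTarget;
}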
That last decision hints that for your application, you'll need to consider either a rule based "expert system" that can be hard coded (but limited), or AI that can learn what to do (a bit advanced and difficult), or some balance between these extremes which is, then, a smart bit of code that can respond to a variety of situations with workable solutions, like walking toward a target - hopefully not with an outstretched arm like a toddler walking toward an objective (parents spend two or three years of their parenting career trying to guard against that).
Since I can't see the input from the tracking data you're getting, I must assume you can identify the data indicating the shoulder position, as that is basically the body's position. If you're ignoring the slight twists and bends of the spine, that positions the character (otherwise you probably have similar data from the waist, and the two combine to give the body position). The head is a single relative attitude (really an angle in 3D) relative to the shoulders; the math for that is much simpler overall. The "tough" part, if trig is tough, is the arms, as given by the positions of the hands, but if you keep your wits about you and use trig to work backwards from the hand's position to the shoulder, you can calculate the solution. There is a caveat: there are usually two matching solutions, one that's right and one that makes it look like the elbow is bent in the wrong direction. You have to pick the solution that puts the elbow below the imaginary line from shoulder to hand, as viewed from the body's local coordinate system (just in case the person is upside down for some reason).
I'll give you a short start on the trig. Look up the law of cosines. Match the upper arm and forearm to two sides of the triangle in its diagram; the imaginary line is the third side. Now calculate, from the formula given, the angle opposite the imaginary line. That's the elbow angle required.
The second objective is to position the upper arm. Relative to the body's local coordinate system, find the angle of the imaginary line (in "pure" trig this would be the line's world-coordinate angle, or slope). Now, using the law of cosines, calculate the angle at the shoulder; this second calculation gives the interior angle of the implied triangle. Add the "world" (or body-relative) angle of the imaginary line to that interior angle, and you get the angle of the upper arm (in "world" or body-relative terms). That's the angle to set at the shoulder. When you set the shoulder joint and elbow to these two angles, the arm is about where it should be.
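The two calculations above can be sketched as a 2D two-link solver using the law of cosines. This is a sketch under assumptions: names, the 2D simplification, and the sign convention are illustrative (subtracting instead of adding the interior angle picks the mirrored elbow solution):

using UnityEngine;

// 2D inverse kinematics for a shoulder-elbow chain, shoulder at origin.
// Returns (shoulder angle, elbow interior angle) in radians that place
// the wrist at `target`. Assumes the target is within reach.
static Vector2 SolveTwoLink(Vector2 target, float upperLen, float foreLen)
{
    float d = target.magnitude; // the imaginary shoulder-to-wrist line

    // Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(elbow angle),
    // so the elbow angle is the one opposite the imaginary line.
    float cosElbow = (upperLen * upperLen + foreLen * foreLen - d * d)
                     / (2f * upperLen * foreLen);
    float elbow = Mathf.Acos(Mathf.Clamp(cosElbow, -1f, 1f));

    // Interior angle at the shoulder corner of the triangle.
    float cosShoulder = (upperLen * upperLen + d * d - foreLen * foreLen)
                        / (2f * upperLen * d);
    float interior = Mathf.Acos(Mathf.Clamp(cosShoulder, -1f, 1f));

    // Angle of the imaginary line itself, plus the interior angle,
    // gives the upper arm's angle in the body's frame.
    float lineAngle = Mathf.Atan2(target.y, target.x);
    return new Vector2(lineAngle + interior, elbow);
}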
The next tricks I'll leave to you, but that system, the shoulder/elbow combination, can rotate on a human: you can lift your elbow "out" to the point that the system moves from vertical to sideways relative to the ground. If you don't have tracking data on the elbow itself, you'll have to improvise, because that rotation doesn't actually change where the hand is; the system can "spin" about 90 to 100 degrees on most people, from where the elbow is tucked into one's side to where it is rotated upward and outward. All of that is compounded by the fact that the shoulder joint can rotate freely (unlike the elbow) in two coordinates (like a joystick). The calculation is very much like the one described so far for the upper-arm/shoulder angle, but instead of being an XY-relative adjustment it's XZ. To be clear, the calculation I described was envisioned to provide vertical alignment to the target once the elbow angle is known, but the human shoulder can do the same for horizontal alignment without rotating the entire body; both sweep in circles, one vertically oriented, the other horizontally oriented.
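That "spin" of the shoulder/elbow system about the shoulder-to-wrist line is often called the swivel angle, and can be sketched as follows (a hedged illustration with assumed names, not part of the answer's project):

// The solved triangle can rotate about the shoulder-to-wrist axis
// without moving the hand. Rotating a provisional elbow position by a
// swivel angle about that axis picks one of the valid elbow placements.
static Vector3 SwivelElbow(Vector3 shoulder, Vector3 wrist,
                           Vector3 elbow, float swivelDegrees)
{
    Vector3 axis = (wrist - shoulder).normalized;
    Quaternion spin = Quaternion.AngleAxis(swivelDegrees, axis);
    return shoulder + spin * (elbow - shoulder);
}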
Thank you for the detailed comment.
I've been thinking about the issue, and thought that maybe I could connect parts through joints, trying to force them to stay together. Would that be a simpler solution towards reaching a believable body model when trying to represent coordinates from my camera?
Part1:
Comments are limited, so this is in 2 parts
I doubt it would be believable, and while the HingeJoint is the prime candidate, all the joints are sensitive to being forced beyond certain limits. Of particular issue is that "cascades" of joints, like shoulder to elbow to wrist, tend to be a problem, but that's more about driving the joints to move than about inverse kinematics.
While I haven't worked on something very similar to your requirement, one project that comes close is a robotic arm with shoulder, elbow and wrist joints (a simulation for a robotics contest where students build actual robots; the simulation became a testing tool for their bot designs). What is similar is the inverse kinematic requirement of the user interface that controls the robotic arm (which was done both in Unity and on a real bot using a microcontroller, with the software written in C).
The interface focused on positioning the "hand" of the bot. The arm was required to position the joints (the only real control one has over the arm) based on where the hand should be, given basically as X, Y coordinates relative to the shoulder joint's position. That is the part that is similar. In my case, however, I had to move the arm over time, based on the speed of the motors driving the joints, to the goal position, but the math implied is the same.
At first the joints seemed to work. It was encouraging, so I finished the arm. It failed miserably. PhysX, the engine under the hood in Unity, has a newer version that offers what are called articulated joints, and they work extremely well (in standalone PhysX, outside Unity). Alas, they're not exposed in Unity, so they're not an option. The joints were beyond jittery: the elbow malfunctions when the shoulder moves, and the wrist malfunctions if either the shoulder or elbow moves. It was absurd. I dropped joints entirely, switched to empty GameObjects at the positions of the joints, manipulated the rotation of each joint in code, and it worked exactly as I intended (with the minor issue of self-collision, where the arm can collide with itself and the bot's main frame).
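The "empty GameObjects instead of physics joints" approach described above might look roughly like this. This is a sketch under assumptions: the class, field names and single-axis rotation are illustrative, not the answerer's actual code:

using UnityEngine;

// Drive a kinematic arm by rotating plain Transforms parented in a
// chain (shoulder -> elbow), bypassing PhysX joints entirely.
public class KinematicArm : MonoBehaviour
{
    public Transform shoulderPivot; // empty GameObject at the shoulder
    public Transform elbowPivot;    // child of shoulderPivot, at the elbow

    // Apply angles (degrees) computed by an IK solve; children follow
    // automatically through the parent-child hierarchy.
    public void SetPose(float shoulderDeg, float elbowDeg)
    {
        shoulderPivot.localRotation = Quaternion.Euler(0f, 0f, shoulderDeg);
        elbowPivot.localRotation = Quaternion.Euler(0f, 0f, elbowDeg);
    }
}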
Part 2:
So, my reaction is that it isn't likely simpler in reality. It will seem so at first, but joints are easily torn apart (even when breakage is set to infinity). You also end up having to control for the elbow limits; on the human arm, the elbow is basically always below the shoulder-target imaginary line, but a "raw" ragdoll kind of relationship doesn't enforce that. The elbow will tend to hang, maybe even jostle, jiggle and wiggle about. I base that on what I've been able (and not been able) to get Unity to do. I have lots of experience in 3D rendering/physics/film post-production/game engines, but Unity is a recent target for me. I'm primarily a C++ programmer, so even C# is recent to me (recent as in a few years, whereas I have decades behind me in C++ and C). Based on reading other posts here, referencing PhysX behavior in other game engines, I've concluded that until I can access articulated joints, the "regular" joints Unity now exposes just can't "do arms" at all. Frankly, joints as they are in Unity are really quite weird.
I'm not saying you shouldn't give it a try, but expect to reach conclusions similar to those that I, and several others who have tried it, have reached.
That said, I'd like to respond to your question, "would that be a simpler solution?". I take it the idea of trig isn't pleasant to you. Bring up a spreadsheet, find references like those I mention (law of cosines, law of sines, right-angle trig), and imagine an arm with an imaginary line from shoulder to target (or wrist/palm). Use the spreadsheet to experiment with the math. You'll discover that the trig is just a few typical multiplications and divisions with some squares, square roots, etc. Treat it like code snippets in an old language. Use the spreadsheet to see how it works; it will translate into C# fairly easily (Mathf.Cos, Mathf.Sqrt, etc.). Once you get those fashioned, they just do their job, and you end up with control over the arm based on where you want the hand to be. Once that's built, you'll realize the result is as simple as you hoped joints would be, but without the baggage joints bring with them.