IK Experiments

My current long-term project is a VR mech game, part of which will involve deeply simulated, intricately controlled mechs with a variety of body plans and behaviors. Part of the technical research and development to make this feasible will be building a powerful, flexible set of constraints that can translate relatively simple player inputs into very precise body positions. One standard approach to that is Inverse Kinematics, so I wanted to explore what I could do with it.

Unity includes an IK chain solver as part of its Animation Rigging package that works pretty well, but I wanted to see what I could do myself. This was motivated partly by some behaviors that I couldn't easily replicate with the built-in IK, such as coiling or swinging, and partly by the fact that I am stubborn and like to do things myself. That said, I have only delved into the basics of what Unity's rigging system can do, so there may be a way to achieve what I'm trying to do with more complex constraint setups. The package also provides a framework for writing custom constraints that integrates more thoroughly with the animation system, which would almost certainly be more efficient once I have built a system that I like.

The first algorithm that I tried was based almost exactly on the one described on the Wikipedia page for Inverse Kinematics. Roughly, it treats IK as a minimization problem: every degree of freedom is a dimension to be adjusted, and the distance between the chain's end and the goal position is the metric to be minimized. Looking at it from this perspective, I implemented a very simple gradient descent, which makes a small trial adjustment at every joint to sample what the local area of the problem space looks like, and then pushes a little bit in the locally best direction.
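To make that concrete, here is a minimal sketch of the idea on a planar chain. This is not the actual project code (which works on Unity transforms in 3D); the names and constants (BoneLengths, SampleDelta, LearnRate) are my own, for illustration only:

```csharp
using System;

class GradientDescentIK
{
    // Hypothetical chain setup: three unit-length bones in a plane.
    static readonly float[] BoneLengths = { 1f, 1f, 1f };
    const float SampleDelta = 0.01f; // perturbation used to probe the gradient
    const float LearnRate = 0.05f;   // how far to push along the gradient

    // Forward kinematics: accumulate joint angles down the chain
    // and return the end-effector position.
    static (float x, float y) EndEffector(float[] angles)
    {
        float x = 0f, y = 0f, total = 0f;
        for (int i = 0; i < angles.Length; i++)
        {
            total += angles[i];
            x += BoneLengths[i] * MathF.Cos(total);
            y += BoneLengths[i] * MathF.Sin(total);
        }
        return (x, y);
    }

    // The metric to minimize: distance from the chain end to the goal.
    static float Error(float[] angles, float gx, float gy)
    {
        var (x, y) = EndEffector(angles);
        return MathF.Sqrt((x - gx) * (x - gx) + (y - gy) * (y - gy));
    }

    static void Step(float[] angles, float gx, float gy)
    {
        // Probe each joint with a small adjustment to estimate the local
        // gradient of the error...
        float baseError = Error(angles, gx, gy);
        float[] gradient = new float[angles.Length];
        for (int i = 0; i < angles.Length; i++)
        {
            angles[i] += SampleDelta;
            gradient[i] = (Error(angles, gx, gy) - baseError) / SampleDelta;
            angles[i] -= SampleDelta;
        }

        // ...then push a little bit in the locally best (downhill) direction.
        for (int i = 0; i < angles.Length; i++)
            angles[i] -= LearnRate * gradient[i];
    }

    static void Main()
    {
        float[] angles = { 0.3f, 0.3f, 0.3f };
        for (int iter = 0; iter < 200; iter++)
            Step(angles, 1.5f, 1.5f);
        Console.WriteLine($"Final error: {Error(angles, 1.5f, 1.5f):F4}");
    }
}
```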

My implementation on the left, Unity's on the right. They are tracking targets at the same position relative to their bases.

This worked surprisingly well for short joint chains. There are some strange jerking moments, but those could be filtered out with some additional safeguards. There is also a degree of jitter once the arm reaches the target, but that can be fixed by reducing the move distance as the chain gets close to the correct pose.
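As a sketch of that jitter fix (extending the example above; again my own names, not the project's exact code), the step size can be scaled down as the error shrinks:

```csharp
// Shrink the step size near the goal so the solver settles into the pose
// instead of oscillating around it. dampRadius is a hypothetical tuning value.
static float DampedLearnRate(float error, float baseRate, float dampRadius)
{
    // Full step size far from the target; proportionally smaller inside dampRadius.
    return error < dampRadius ? baseRate * (error / dampRadius) : baseRate;
}
```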

My implementation on the light blue long arm, Unity’s on the orange arm. Notice how the Unity solver looks happy and content, and mine looks vaguely terrified.

I also tested this algorithm on longer joint chains, and the results were less than stellar. The algorithm wasn't even able to converge on roughly the right pose, and would usually contort itself into a confused line off in a random direction. I think this is because it operates on joint rotations: a small rotation near the base moves the end of the chain by an amount proportional to its distance from that joint, so changes at the base wildly affect not only the positions of the joints down the chain, but also the effect that their own rotations have. Altogether, this made for a very confused robot arm.

There is a whole world of improvements that can be made to minimization algorithms generally, as well as a number of tweaks specific to this problem. While those could potentially be viable, the more specialized the algorithm gets, the less flexible it ultimately is, and the more time I would have to spend parsing complicated math. More importantly, I discovered that a much better solution already existed. Next time I'll be exploring exciting things with FABRIK.
