Wednesday, September 4, 2013

Turncoat Dev Diary: Help! I'm Falling and I Can't Stand Up…


Just as I started trying to figure out how the game's touch controls should work, I began to be really bothered by a couple of problems in the basic movement of my character. One of those, which I've mentioned before, is the funky camera accordioning in the arc right and arc left animations. Turns out, those issues were more than cosmetic; the stuttering camera, combined with the fact that stopping isn't instantaneous, made it virtually impossible to line up the character precisely as you stopped moving.



It was easy enough to solve, though. I simply removed the arc left and right animations from the blend trees in my state machine, then added some code to my character controller class to rotate the whole character as she walked:

    transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The turnSpeed variable can be set in the inspector, so it can be adjusted on a per-character basis. The horizontal value is pulled from the x-axis of the joystick or determined from the left/right buttons or touch screen controls. The resulting turn animation is a tiny bit less realistic than using the animated left and right turns. You'd think that just rotating the whole character a small amount while they walked forward would look really fake, but it doesn't. Maybe it's simply that this is the way most third person games, including pretty much every MMORPG, have always worked. Maybe our eyes are just accustomed to this particular cheat. Either way, I'm willing to sacrifice that tiny bit of realism for better, more precise controls.



After playing with it a bit, I decided that turn speed probably shouldn't be the same when walking and running. Instead of setting a single turn speed in the inspector, I'll let you set both a walking and a running turn speed and then interpolate between them. They can still be set to the same value using this approach, but they don't have to be.

float turnSpeed = (turnSpeedDifference * currentRun) + walkingTurnSpeed;
transform.Rotate(0, horizontal * turnSpeed * Time.deltaTime, 0);

The variable turnSpeedDifference gets calculated once at startup, since I don't anticipate these values changing at runtime:

turnSpeedDifference = runningTurnSpeed - walkingTurnSpeed;

I'm pretty happy with turning now, but there's another problem that I didn't notice until I expanded the playing field. The original field was sufficient for testing the basics of movement, but I realized that once I moved beyond the basics, I'd need ways to test things like crawling through vents and taking cover, so I expanded the test field, making it taller and longer. I added some ducts the same size as the ones in the prototype level, added some objects to take cover against, and added a third level platform and another set of stairs. That made the test level look like this:


As I started exploring this expanded test level, I realized that falling from a height greater than maybe two or three meters looked unnatural, because my character would try to walk or idle. Walking on air is a pretty neat trick, but not very realistic.

The provided CharacterController class, which is what I've been using to handle basic interaction with the environment (climbing stairs, being affected by gravity), has a method called IsGrounded()¹ that will tell you if you're standing on the ground. If you're walking, running, or idling, this will return true. If you're jumping or falling, it will return false.

That's the theory, at least. It always returns true for me, no matter what my character is doing. Now, I understand why it might not work when jumping: the elevation increase is baked into my jump animation, so the character controller doesn't actually leave the ground. The bone colliders move up into the air, so interaction with props is correct, but the implicit collider used for interacting with terrain does not, and as a result IsGrounded() returns true. More confusing to me, though, was why it returns true when I fall off of one of the higher levels. Even when falling off a third story platform, it would never report false, and I had no working theories about why.

Because CharacterController is an opaque class provided by Unity, there wasn't an easy way to debug why it wasn't working as expected, so I decided to stop using the provided class and roll its functionality into my own controller. I removed the CharacterController component and added a Rigidbody component (the component in Unity that makes something part of the physics world) as well as a CapsuleCollider. Because my character is part of the Player Controller layer, just as it was with CharacterController, the CapsuleCollider should only interact with terrain, not with props, which are left to the bone colliders. In theory, everything should work just as before except for situations that were being explicitly handled by the CharacterController class.
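For what it's worth, the component setup boils down to something like the sketch below. The class name, the decision to configure in code rather than the inspector, and the specific settings are illustrative, not my exact controller code:

    using UnityEngine;

    // Rough sketch of the CharacterController replacement setup. The
    // class name (PlayerMotor) is a placeholder, and these settings
    // could just as easily be made in the inspector.
    [RequireComponent(typeof(Rigidbody), typeof(CapsuleCollider))]
    public class PlayerMotor : MonoBehaviour
    {
        void Awake()
        {
            Rigidbody body = GetComponent<Rigidbody>();

            // Stop the physics engine from tipping the character over;
            // we handle all rotation ourselves.
            body.freezeRotation = true;

            // Helps fast falls register terrain collisions reliably.
            body.collisionDetectionMode = CollisionDetectionMode.Continuous;
        }
    }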

Surprisingly, the swap worked really well. It works way better than I expected, actually. Without even implementing the IsGrounded() functionality, I'm already able to move around the level just as I was before. I had to tweak various values on the Rigidbody and CapsuleCollider components to get things just right, but it turns out I was getting far less benefit from the CharacterController component than I realized. Even climbing up slopes and stairs works pretty much as expected.

Pleasant surprises like this one are few and far between. I expected to put a lot more work into replicating the functionality I was getting from CharacterController, so I took a moment to savor the victory.

Then it was time to turn my attention to figuring out when my character is grounded, when they're jumping, and when they're falling so that I can show the correct animation for each situation.

I tackled whether they're grounded first. There are a couple of different possible approaches here. The one I opted for is to simply cast a ray straight down from the player to determine the distance to the ground. If that distance is greater than it is when they're just standing, we know the character is not grounded. In my case, that looks a bit like this:

        RaycastHit groundHit;
        Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
        grounded = groundHit.distance - groundedDistance <= 0f;

The variable origin is the calculated center of the capsule collider. The second parameter to Physics.Raycast is the direction I want the ray cast in, which is straight down. Unity's Transform gives you a vector pointing up but, oddly, not one pointing down, so multiplying transform.up by -1 gives us a vector pointing straight down from the character.

The third parameter is used to determine what object was hit, if any. C#, like Java, doesn't have pointers, so that funny out keyword is used to pass groundHit by reference rather than by value. As I've said before, I don't hate C# nearly as much as I hate Java, but there are still times when this language bugs me. Here's one example. I miss pointers. I know many devs feel we've outgrown the need for pointers and that our languages should hide them from us but, personally, I find this whole out² business to be far clunkier than simply passing the address of a variable. I understand some of the security concerns around pointers, but all the other arguments against them ring hollow to me.

Anyway, the next argument (100f) simply tells the ray cast to stop looking if it hasn't found something within 100 units. In my test level, units are roughly equivalent to meters, so that should be far enough to hit the ground no matter where I am on the level. The final argument is called groundLayers, and this one's a little confusing. It's a bitwise mask field used to specify which layers I want the ray cast to consider. It's very similar to the physics settings I used previously to keep the bone colliders and character collider from interfering with each other.

Determining which values correspond to which layers is a little confusing but, fortunately, you don't need to do it by hand. You can declare a public LayerMask variable, and Unity will present a user interface in the inspector that lets you select the layers to be included.
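In code, that amounts to something like the following. The "Terrain" layer name in the second half is just a placeholder for whatever your project defines; the inspector approach makes building the mask by hand unnecessary:

    // Declared public so Unity shows a layer multi-select dropdown in
    // the inspector; no bit twiddling required.
    public LayerMask groundLayers;

    // The equivalent mask can also be built manually if you ever need
    // to. "Terrain" is an example layer name, not one from my project.
    int terrainMask = 1 << LayerMask.NameToLayer("Terrain");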


Once I have the results of my ray cast, it's relatively easy to figure out if I'm grounded. The variable groundedDistance is half the height of the capsule collider, plus a small amount extra to account for small terrain changes. I'm ray casting from the center of the collider, so the ground should be half the collider's height away. If it's farther than that distance (plus a little slop), we're not grounded.
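That calculation only needs to happen once. A sketch of what I mean, where the slop field's name and default value are illustrative:

    // Small fudge factor so minor terrain bumps don't register as
    // airborne. The name and the 0.1f default are illustrative; tune
    // it to your level.
    public float groundedSlop = 0.1f;

    float groundedDistance;

    void Start()
    {
        CapsuleCollider movementCollider = GetComponent<CapsuleCollider>();

        // Ray cast starts at the collider's center, so the ground
        // should be half the collider's height away when standing.
        groundedDistance = (movementCollider.height / 2f) + groundedSlop;
    }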

In my testing, this works perfectly, except the jump problem is still there. My capsule collider doesn't move up as the character jumps, so this code reports that we're grounded when we're jumping.

For the jump problem, all I have to do is add a boolean variable to the class to track when a jump starts and when it ends. Only, it's not quite that simple. When you tap the jump button, that starts the jump animation. With a running jump, the character immediately springs into the air, but with a standing jump, there's a build up as the character bends their knees and then springs up. In both cases, the character's feet hit the ground some time before the animation ends. It seems like that's the point where we want them to start falling. We don't want them to land on thin air and then start to fall down.

The first thing to do was to figure out the exact timings for my two jump situations. After some trial and error, I came up with these values:


    // Timings used on the jump. Running jump starts immediately and transitions immediately back
    private static float jumpResetDelay = .1f;                          // Used to set the Jump input back to 0
    private static float runningJumpAnimationDuration = .416667f;       // Running jump animation is reported as .867f seconds,
                                                                        //    but is actually .416667, Mixamo probably trimmed on import

    // Standing jump has a build up and recover, so feet don't leave the ground immediately, and animation continues
    // for a short period of time after
    private static float standingJumpLeavesGround = .5f;                // When the feet leave the ground on standing jump
    private static float standingJumpBackOnGround = 1.2f;               // When the feet are back on the ground on standing jump

Now, the trick is to use them. This is one of those areas where language differences bite you. Pretty much every mechanism I would use to accomplish this in Objective-C or C is unavailable in Unity using C#. Apparently, for performance reasons, Unity's APIs are not threadsafe, so even though C# supports threading, Unity kinda doesn't. Instead, the suggested way to do something like this is to use these funky things called co-routines: functions that can yield execution back to the caller. In Unity, these functions fire on the main thread, but they can yield time back to it, much like a thread sleeping for a specified period of time.

After some playing around, I came up with something that seems to work well. When the jump button is tapped, this co-routine fires:

IEnumerator TriggerJump(bool isRunning)
{
    if (isRunning)
    {           
        jumping = true;
        AnimatorSetJump(true);
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(runningJumpAnimationDuration - jumpResetDelay);
    }
    else
    {
        AnimatorSetJump(true);
        yield return new WaitForSeconds(standingJumpLeavesGround);
        jumping = true;
        yield return new WaitForSeconds(jumpResetDelay);
        AnimatorSetJump(false);
        yield return new WaitForSeconds(standingJumpBackOnGround - (standingJumpLeavesGround + jumpResetDelay));
    }
    
    jumping = false;
}


If the character is doing a running jump, the public variable jumping gets set to true immediately, but if they're doing a standing jump, then we wait until the character's feet actually leave the ground to set it. In both cases, we set the Jump input to the animation state engine back to false after a short delay to make sure we don't accidentally trigger a second jump animation, and then, when the character's feet are back on the ground, we set jumping back to false.
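Kicking the co-routine off from the input handling code is then a one-liner. Something along these lines, where the running check is a stand-in for however your controller already decides the character is running:

    // Guard against re-triggering a jump while airborne. The
    // "currentRun > 0f" test is illustrative; use whatever your
    // controller uses to distinguish running from standing.
    if (Input.GetButtonDown("Jump") && grounded && !jumping)
        StartCoroutine(TriggerJump(currentRun > 0f));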

Back in our Update() method, we should now be able to check at any point whether our character is jumping and get the correct value (though some tweaks to the timing are to be expected during testing). Knowing this will help us avoid falling into gaps that we're trying to jump over, for example. Now that I can tell when we're jumping, I can update the grounded check to take jumping into account.

    Vector3 origin = transform.position + transform.up * movementCollider.center.y;

    RaycastHit groundHit;
    Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
    grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

Now that I have a reasonably accurate way to determine if the character is grounded, I should be able to tell when to fall and when to stop falling, right?

*sigh*

I knew I'd pay for that earlier bit of serendipity. Turns out, the whole falling thing is harder than I expected. I implemented the code to start falling when the ground is a certain distance away. I made that distance configurable, since it could conceivably change based on the character's height, and then set it for this character. It mostly worked. There are some edge cases, such as going up stairs quickly, where it needs to be tweaked but, for the most part, starting a fall works as expected.
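The start-of-fall check itself is simple. A sketch of the idea, where the field name, default value, and the AnimatorSetFalling helper (mirroring AnimatorSetJump) are my own illustrative inventions:

    // Extra distance, beyond groundedDistance, that the ground must be
    // before we commit to falling. Exposed so it can be tuned per
    // character; the name and default are illustrative.
    public float fallTriggerDistance = 0.5f;

    void CheckFalling(RaycastHit groundHit)
    {
        // Don't start falling mid-jump; the jump co-routine owns that state.
        bool shouldFall = !jumping &&
            (groundHit.distance - groundedDistance) > fallTriggerDistance;

        if (shouldFall && !falling)
        {
            falling = true;

            // Hypothetical helper, analogous to AnimatorSetJump, that
            // flips the Falling input on the animation state machine.
            AnimatorSetFalling(true);
        }
    }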

Landing, however… Well, landing doesn't work so well. The character "lands" a few feet above the ground and then settles down to the ground as they start to stand up.

This one made me pull my hair out. It made no sense to me.

It wasn't until I watched the character in Unity's scene view that I realized what was happening. The character's height changes as they fall, and then again as they absorb the impact of the fall, but the capsule collider being used to figure out when they've hit the ground doesn't change in height, so we detect hitting the ground while our character's feet are still a few feet above the ground.

That might make more sense if you see it in action:


You can see how significant the difference in height is in this screenshot:


There are a couple of ways I can fix this. The way the Unity Mecanim tutorials show is to use an animation curve and tie the height of the capsule collider to that curve.

We do need to make the capsule smaller and adjust its origin up a little so it overlaps our character while falling. But in addition to that, we're raycasting from the center of the capsule in code to figure out if we're grounded, so we have to account for this change in height in that code as well. Since I have to write code to deal with this anyway, I'd rather handle the capsule collider changes there too. Saying that, I probably sound to the Unity folks the way people who refuse to use Interface Builder sound to us old school Mac and iOS devs, but it seems logical to keep the functionality in one place.
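So the resize lives alongside the fall detection. A sketch of the approach; the specific fractions are placeholders rather than my final numbers, and groundedSlop is the same illustrative fudge factor used in the grounded check:

    // Shrink the capsule while falling and restore it on landing.
    // The 0.6f/0.2f fractions are placeholders; I arrived at my real
    // values by trial and error in the scene view.
    void ResizeColliderForFall(bool isFalling)
    {
        if (isFalling)
        {
            movementCollider.height = standingHeight * 0.6f;
            movementCollider.center = standingCenter + Vector3.up * (standingHeight * 0.2f);
        }
        else
        {
            movementCollider.height = standingHeight;
            movementCollider.center = standingCenter;
        }

        // The grounded ray cast measures from the collider's center,
        // so recompute the expected distance whenever height changes.
        groundedDistance = (movementCollider.height / 2f) + groundedSlop;
    }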

With some trial and error, I found the right values and timings for resizing the collider. Those will likely need some tweaking as I test more, but I'm pretty happy with the overall result. I was just about ready to move back to figuring out touch controls when I started noticing another movement problem. When I ran up the stairs or up the slope, the character would sometimes start falling at the top, even though there wasn't any way they could possibly fall there.

Ray casting doesn't take the size of the collider into account; it just draws a line straight down from the specified point. There's a small gap between the top stair and the platform. It's tiny - not big enough for a person (or our collider) to fall through - but if the ray cast happens to be exactly over that gap when I do my check for falling, we get a false positive, and the wrong animation gets kicked off.

I could cast multiple rays down to make sure the gap isn't too small to fall through, but Unity actually provides a way to do a ray cast that takes X and Z size into account. It's called a sphere cast, and I stumbled upon it purely by accident. Fixing this issue turned out to be a matter of simply changing my ray cast call to a sphere cast call, using the radius from the capsule collider:

    RaycastHit groundHit;
    //Physics.Raycast (origin, transform.up * -1, out groundHit, 100f, groundLayers);
    Physics.SphereCast(origin, movementCollider.radius, transform.up * -1, out groundHit, 100f, groundLayers);
    grounded = groundHit.distance - groundedDistance <= 0f && !jumping;

At this point, basic movement is working pretty well. I can walk, run, jump, and fall down fairly realistically. I still have to do crouch and cover, but with these fixes, I think I'm finally ready to start exploring touch controls.

Next: Touch Controls
Previous: Prototyping Player Game Mechanics, Episode II



1: Yes, this is correct. The accepted convention in C# for naming methods is to start them with a capital letter. Considering this language came from the same people who gave us Hungarian Notation, however, this is a pretty tolerable bit of ugliness.

2: It seems simple, right? Specify out if you want to pass by reference, leave the keyword out if not. Only, it's not quite that simple. You can also use the keyword ref to specify that you want an argument passed by reference. Two ways to do the same thing, except that a ref variable has to be initialized before it can be passed in, while an out variable doesn't. This isn't simplicity, it's just different complexity with less power.
