Body Blend Reference - *** Email questions to zcgray-at-southern-dot-edu or schillerm-at-southern-dot-edu ***

 

Initial Considerations (NEW UI AND ERROR CHECKING COMING SOON)

 

Skip ahead for a tutorial.

 

Keep in mind that the UI is in beta, and does not trap many errors. Don't try it in a production environment until you have a chance to learn the idiosyncrasies. An updated UI is in the works that will allow more versatile control of the driven keys. If you just want the extracted shape, use the driver 'none'.

 

Body blend requires that you have a skin that is free from extraneous history and has frozen transformations before binding. Obviously, one blend shape node will exist per mesh. Keeping as much info in one blend node keeps blends from fighting or overlapping in strange ways.

 

Be absolutely certain that you DO NOT MOVE the skin or bones before establishing the ‘bind pose’. A quick test is to detach the skin, and see if the skin moves at all.

 

Installation

Download bodyBlend.mel

*** A FEW BUG FIXES UPDATED NOV 21 *** Copy to your scripts directory. Source bodyBlend.mel. Run bodyBlend to load UI.

 

Usage

Create:

·        -Character Set- Enter the name of your character into the “Character Set” field. Since Maya’s implementation of the ‘bind pose’ is a bit rigid and limiting, a new ‘bind pose’ is created based on a character set. This character set must include all of the channels required to achieve a ‘zeroed’ out skin cluster. In essence, we are creating our own ‘bind pose’ using a character pose. If you are not accustomed to using character sets, you must make a rudimentary set for this process, and you may delete it later.

·        -Set Bind Pose- Set the body blend ‘bind pose’. This sets the position where each vertex’s space lines up to world space. No bones should be moved between binding and setting this pose. If you want to check that the pose was created properly, open your Visor and look for it in the ‘Character Poses’ tab.

 

Setup:

·        -New- Before clicking ‘New’, pose the character in the first position that needs sculpting. Be sure to select the target surface that will hold the blend (the one that you will be deforming). You will be prompted to name your blend node; call it whatever makes the best sense to you and your organizational scheme. We don’t recommend making more than one blend node per surface, but you can have many nodes on a character. You will be prompted for a -Target Name-. This represents the first pose that you will sculpt. Again, your character should be in this pose before clicking ‘New’. You can always go back to your established bind pose by clicking the ‘bind pose’ button. Body Blend will automatically duplicate a working copy and hide your original skin. Read ahead for descriptions of visibility modes. Sculpt this copy to achieve the desired deformation. Feel free to use whatever method works best to achieve the pose – artisan, average vertices, clusters, lattices, etc.

·        -Driver- Select the ‘Driver’ method that you prefer for your first target. Many people rig blends to joint rotations, but joint rotations are unpredictable: several combinations of XYZ, ZYX, YXZ, etc. rotations can achieve the same pose. To make achieving the pose more predictable, we implemented a spherical, directional ‘vector’ model. You can think of a pose as a point on the surface of a sphere, with the joint at the center of that sphere. As the joint changes orientation, it moves closer to or farther from that ‘pose point’. If you choose, you may hook the pose to a joint rotation, or select ‘none’ to drive the blend manually. The –Relative To- selection adjusts how the driven key is set up; different poses may require different approaches. Read ahead for descriptions of each driver type.

·        -Vector- A predictable, three-dimensional model for determining joint position. The child joint is used to determine the current vector, so select the child of the joint whose orientation you want to read. In order for the vector to update properly, it must be parented to the joint above the one that determines the orientation.

·        -Joint- Standard single axis driver. Select the preferred axis of rotation to drive the blend from.

·        -None- Allows you to manually drive a blend target. This simply generates a target for the current pose.

·        -Match- This performs the relative vertex space to world space conversion. It will also toggle the visibility modes to reflect the actual skin and not the target.

·        -Add- Use this method for establishing non-overlapping blend targets. If you think there may be existing blends influencing the joint area that you are going to sculpt, by all means use the stack command instead.

·        -Stack- This super cool function allows sculpt deformations to accumulate. One target can be relative to preexisting targets, so the final result is the sum of all of the targets. The difference between the existing targets and the new target is removed for the specific pose. Be aware that if you edit input targets to a stacked blend, the stack will not update until it is ‘matched’ again. Internally, this works by reordering the tweak node, then rolling off the envelope of the incoming blends to extract the stacked shape.

·        -Delete- Removes the blend target and its control system.
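The stacking arithmetic described above can be sketched in a few lines. This is a minimal illustration in plain Python, not MEL or the Maya API; the function names and tuple-based vertex representation are assumptions for clarity only:

```python
# Sketch of how stacked blend targets accumulate. Vertices are (x, y, z)
# tuples; each "delta" is a per-vertex offset list, one per target.

def extract_stacked_delta(sculpt, base, existing_deltas):
    """The new stacked delta is the sculpt minus the base plus all
    incoming deltas, so it only stores what the earlier targets missed."""
    result = []
    for i, (sx, sy, sz) in enumerate(sculpt):
        bx, by, bz = base[i]
        for (dx, dy, dz) in [d[i] for d in existing_deltas]:
            bx, by, bz = bx + dx, by + dy, bz + dz
        result.append((sx - bx, sy - by, sz - bz))
    return result

def apply_deltas(base, deltas, weights):
    """The final shape is the base plus the weighted sum of every delta."""
    out = []
    for i, (bx, by, bz) in enumerate(base):
        for (dx, dy, dz), w in zip([d[i] for d in deltas], weights):
            bx, by, bz = bx + w * dx, by + w * dy, bz + w * dz
        out.append((bx, by, bz))
    return out
```

Because the stacked delta is computed against the incoming targets at match time, editing one of those earlier targets invalidates the stack until it is matched again, exactly as the -Stack- note warns.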

 

 

Current

·        -Blend Nodes- Double clicking the blend name will select the blendNode for easy driven keys.

·        -Targets- Double clicking the target will select and assume the pose that the target was originally created in. Single clicking the target in the list will select it for editing driven keys. Multiple selections are allowed, but only to assist in driven key setups.

·        -Visible- For the target selected in the list, toggle the visibility of the sculpt target, original skin, the extracted relative target, or if applicable, the distance tool for the vector driven control.

·        -Edit- At any time, you may edit the target with the existing controls. At this point, there isn’t a simple way to update the controls with an existing target, aside from making a new control and target, and doing a polyTransferVertices from the old target.

Driven Key

·        When you create an extracted blend target with a built-in driver method, the Driven Key section is activated. You can then use the slider to set the distribution and entrance of your pose. First key the shape at the current pose, then click the ‘bind pose’ button and set the second key.

Refresh

·        At times, the UI list may not properly reflect your current selection. Use the refresh control to redraw.

 


How it works:

Prior to binding a skin in Maya, all vertex transformations occur in world space. When you move a point along the Y axis, it naturally moves up along that axis. After binding, each vertex inherits the blended weighting of the transformations of the input bones. When you attempt to move a point along the Y axis, you may find that it moves along an arbitrary axis. Instead of moving in world space, it’s now moving in ‘crazy space’: each vertex moves in its own local coordinate system derived from the input transformations of the bones. A common workaround is to duplicate the target at the bind pose, move the character to the problem pose, then sculpt the duplicated object to achieve the result. This still does not solve the ‘crazy space’ problem, as you must watch the target very carefully while the coordinates are still confused.

 

http://www.optidigit.com/stevens/rigtut.html

 

 

Rather than try to extrapolate and reverse engineer all of the input transformations, we simply look at the local coordinate system of each vertex and determine what needs to happen to align that coordinate system to world space. A copy of the model can then be manipulated in world space, and the inverse transformation applied to the original bound shape. This operation occurs at the problem pose; the input transformations are then removed by moving the character back to the bind pose (where the joint transformations line up with world space coordinates), and the result is extracted. Using this method, we can quickly extract targets. In our test case, we processed 10k vertices in 30 seconds, and most operations feel instantaneous. While quite visionary, prior implementations took a *very* long time to compute.
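The core of the space conversion can be sketched as a per-vertex matrix operation. This is a plain-Python illustration, not the script's actual implementation; the function names are assumptions, and for simplicity it treats the vertex's blended transform as a pure rotation (so its inverse is its transpose):

```python
# Sketch of the per-vertex space conversion. Each bound vertex moves in its
# own coordinate system blended from the bones; aligning that system to
# world space means applying the inverse of the vertex's blended rotation
# to the world-space offset sculpted on the duplicate.

def transpose(m):
    """3x3 transpose; for an orthonormal rotation this is the inverse."""
    return [[m[j][i] for j in range(3)] for i in range(3)]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def world_offset_to_vertex_space(offset, vertex_rotation):
    # Map the world-space sculpt offset back into the vertex's local
    # ('crazy') space so it can be stored as a blend target offset.
    return mat_vec(transpose(vertex_rotation), offset)
```

Applying the vertex's rotation to the converted offset reproduces the original world-space offset, which is why the extracted target looks distorted at the bind pose but lands perfectly at the problem pose.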

 

http://www.animagicnet.no/maya/bodyShape/

 

The tweak node is used to store post-skin transformation data. We use it to temporarily store the local transformations, then reset it for creating the next shape.

 


We’ll use a very simple test case of an elbow and shoulder, since they usually have a bit of deformation trouble.

 

Before binding your surface geometry, it’s a good idea to freeze transforms and remove all history.

 

 

To keep things simple, I’ll implement a very simple joint structure with an IK arm and spine. I’ll parent the ikHandles to poly cubes for simple selection.

Body Blend can handle very complex controls, as long as you can figure out where to drive your motion from.

 

 

All of the controls in your skeleton must be included in a character set for Body Blend to achieve its bind pose. This character set must include everything necessary to achieve the pose where the skin was first attached. If it does not, the shape extraction will not function properly.

 

After binding the skin, paint weights to achieve decent deformation. In this example, I haven’t painted any weights. To be certain that you are actually at your bind pose, you can test by selecting Skin > Detach Skin, then undoing. If the skin moves at all, the skeleton is not in the bind pose and must be reattached.

 

After launching body blend, enter the name of the character set, then hit ‘set’.

Set the body blend bind pose.

 

I’ll work on the elbow first. It’s a simple joint driven fix. I’ll split the deformation into two parts, one at 90 degrees and one at 110 degrees. Pose the arm in the first location.

 

 

First, we need to set up how this will be driven. It’s important to select the joint and driver system BEFORE creating the target. Since the elbow is a simple hinge joint, it’s a good candidate for a simple one axis joint driven setup.  Select Driver > Joint, pick the elbow joint and hit ‘grab’. Then select the Y axis.

 

To determine the right axis, select the joint with the rotate tool, or look closely at the local rotation axis.

 

 

Select the skin shape, then click ‘New’. Name the blend shoulderFix. Next, you will need to name the target; I’ll call it elbowHalf. Be certain not to use any spaces in the names. Notice how the visibility toggles adjust to show that you are now working on the target. Model the target to achieve perfect deformation for that pose. (I’ve re-colored the target for visual clarity)

 

I’ve tried to maintain volume in the bicep, remove some of the flattening on the back of the arm, and clean up the elbow area.

 

 

Hit match. This will extract the relative blend. Notice that the visibility mode is now set to skin and the new shape is applied. If you experiment with the setup, you will see that the shape is still active in the bind pose.

Reset the pose by double clicking the target. Hit the ‘key’ button while the blend is set to 1. Next, click the ‘bind pose’ button, set the blend to 0, and hit ‘key’ again. Now test your rig to see the updated target applied. Ideally, perfect deformation.
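The two keys set here define a simple linear ramp on the driver. As a rough illustration (plain Python, not Maya's actual animation curve; the angle values are just this tutorial's elbow example), the resulting weight behaves like this, held flat outside the keys as driven keys do by default:

```python
# Sketch of the two-key driven setup: weight 0 at the bind pose and
# weight 1 at the sculpted pose, linearly interpolated in between.

def driven_weight(driver, bind_value=0.0, pose_value=90.0):
    """Map a driver value (e.g. elbow Y rotation in degrees) to a blend
    weight, clamped like flat pre/post infinity on a driven-key curve."""
    t = (driver - bind_value) / (pose_value - bind_value)
    return max(0.0, min(1.0, t))
```

So at 45 degrees the elbowHalf target is half applied, and past 90 degrees it simply holds at full strength until a stacked target takes over.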

 

 

In this illustration, colors have been added and the shapes moved for clarity. The orange illustrates the sculpted target shape, the yellow is the original skin shape with 50% of the blend applied, and the green shows the extracted world space target with the joint transformations subtracted.

 

 

While the driver section is still set to the elbow joint on the Y axis, I’ll add the next shape. The process is very similar, but this time we’ll stack a target. This method takes into account any previous blend targets that may be applied. In this case, the elbowHalf target is applied at 90 degrees, and we want to supplement that with a new target from 90 to 110 degrees.

 

Position the rig so the joint is bent 110 degrees. Click the stack button, name it elbow110stack, and a new sculpt target is added. Modify the target to fix any deformation problems. I worked to remove any interpenetration problems, then smoothed to check the result. Undo the smoothing before applying the match command. Click ‘match’ to extract the relative target.

 

 

The driven key value will be set to 1, and the pose will be elbow110stack. Key the shape in this position. Next, double click the elbowHalf pose. You will see that the skin is in an incorrect pose. Be careful with the next step: double clicking assumes the pose, but also selects the target. Single click elbow110stack to select the target. Pull the blend value back to zero; the target should look good. Key it.

 

 

The shoulder area is a bit more complex, since several rotations must be combined to achieve the pose. Position the arm facing straight down. I’ll select the child of the joint that will determine vector direction, in this case the elbow. We are trying to determine the orientation of the shoulder, but its child predictably provides a world-space position for the vector.

 

 

Set the driver to vector, select the elbow as the End Point and the chest as the Parent. I chose the chest because the clavicle may move somewhat independently, which makes the chest a better parent in this case. Usually, you will select the immediate parent of the joint you are working with.

 

Since the deformation won’t be overlapping the arm, use the Add button to create the new shape. I’ll flatten the arm, as if it were pressed against the body, and try to maintain better form in the body itself. I’ll also add a little width to the arm, and keep the volume in the upper shoulder.

 

 

Hit match to extract the shape. Key the pose in the current position. The blend is driven by the distance tool. As the arm approaches the down position, the distance will decrease. It doesn’t matter how it’s being rotated, how many rotations it takes, or from which angle it’s coming in: when it’s in the down position, the distance decreases.
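This distance-based behavior is what makes the vector driver rotation-order independent. A minimal plain-Python sketch of the idea (the falloff shape and function names are assumptions; the actual tool keys the distance output directly):

```python
import math

# Sketch of the spherical 'vector' driver: the pose is a stored point on a
# sphere around the joint, and the distance between the child joint's world
# position and that pose point drives the blend.

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def vector_driven_weight(child_world_pos, pose_point, falloff):
    # No matter which combination of rotations produced the pose, the
    # distance shrinks to zero as the joint approaches it, so the
    # weight rises smoothly to 1.
    d = distance(child_world_pos, pose_point)
    return max(0.0, 1.0 - d / falloff)
```

For the shoulderDown example, the pose point sits below the shoulder where the elbow lands when the arm hangs straight down, and the weight reaches 1 only in that position.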

 

 

After testing the range of motion, I want to add a bit more volume in the chest area. I’ll simply double click the shoulderDown pose, turn on the visibility of the target, and change the shape. Then, all I need to do is click the edit button to update the shape.

 

To cover the basic range of motion for the shoulder, I’ll pose the arm in the up, down, front, and back positions, and stack blends on top of each deformation. Again, be careful when keying blends to be sure you have the right one active. Here are the target locations for each blend shape.

 

Here are a few selected extracted world-space blend targets. Notice how the elbow110stack target is horribly deformed, but when combined with the incoming deformations, the result is perfect. Again, the odd-looking targets are a result of extracting the joint transformations from each vertex.

 

You may need to add shapes that are not driven, then add custom controls, or hand animate specific fixes for especially difficult poses.

 

This process has fixed a good number of deformation problems, but this rig doesn’t have controls for twisting the forearm, an auto clavicle, or even pole vectors. Each of these areas would need to be addressed, and blends added or stacked on. When things get more tricky, you may want some targets to be dependent on others. You can hijack the outputs from the distance node and run them through a multiplyDivide node, so the shape is only applied if other conditions are met as well.
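The multiplyDivide trick amounts to multiplying two driver weights, so a target only comes in when both conditions are met. A trivial plain-Python illustration of that gating (the function name and the example weights are assumptions, not part of the tool):

```python
# Sketch of gating one target by another condition, mimicking routing a
# distance-driven weight through a multiplyDivide node in multiply mode:
# output = input1 * input2, so the shape is zero unless both drivers are.

def gated_weight(shape_weight, condition_weight):
    """A forearm fix, say, applied only while the arm is also down."""
    return shape_weight * condition_weight
```

With this wiring, a fully keyed shape (weight 1) still stays off while its gating condition reads 0, and fades in proportionally as the condition rises.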

 

 

It could also be a good idea to develop a simple animation with your rig, for easily testing how the blends interact. It won’t interfere with any part of the blend process, since it’s pose based. To check, simply hit play to review all your difficult poses and the transitions to and from them.

 

The result of this process is arbitrary deformation defined in a few key poses, then properly interpolated across the entire animation sequence.

 

 


UI created by Matt Schiller

Matrix math by Jeff Knox

Conceptual Development by Zach Gray, Michael Hutchinson, Matt Schiller

Documentation by Zach Gray

 

 

Developed at the School of Visual Art and Design at Southern Adventist University