Procedural Muscle and Skin Simulation

Zachary Gray – Michael Hutchinson



This paper presents a method and workflow for developing anatomy-based character animation. The process leverages existing Alias Maya technologies such as skeleton-driven deformation, soft body particle animation, particle collision detection, dynamic spring systems, shape interpolation, and Boolean modeling. An anatomical representation of the character is created using muscles, bones, and fatty areas. These anatomical structures are driven by dynamics and a high level of skeletal control. A skin is then calculated to match the muscle understructure. By nature, a dynamic system progresses toward entropy. This method limits entropic interference by providing ideal initial conditions for the simulation on a per frame basis. The simulation output is then filtered to remove periodic noise, and surface detail is reintroduced using a deformation tool based on the surface normal. The approach differs from most others in that the final shape of the external skin is defined directly by the shape and movement of the underlying bones and muscles. As the results demonstrate, the system can be used to create anatomical character animation. The system is not intended to be completely anatomically accurate but to provide a visual approximation of muscular and surface anatomy.



1.                 Introduction

    1. Relationship of research to the animation art form
    2. Purpose of research
    3. Simulation process

2.                 Background and Related Work

3.                 Muscle System

4.                 Skin System

    1. Isolating each simulation frame
    2. Monocoque collision
    3. Pose space deformation
    4. Muscle expansion and skin contraction
    5. Segmenting the simulation
    6. Periodic filtering
    7. Normal deformation

5.                 Conclusion and Future Work

6.                 Acknowledgements

7.                 Works Cited

8.                 Figures (Inline)

9.                 Appendices

    1. DVD visual demo
    2. DVD mel scripts and C++ code


1.                 Introduction

Relationship of research to the animation art form

Animators carefully study and observe the process by which thought becomes action. They strive to capture the thinking process of a character and then translate those ideas into motion. What remains in the motion or action is only a shell: an external representation of the thinking process that created it. Without that thinking process, a final animated performance would be weak. The animator, in effect, reverse engineers actions from thoughts. The viewer is presented with only the summation of all the work, failed attempts, and thinking that drove the process. Ultimately, that final output is most critical.


Effects artist Habib Zargarpour discusses process and output:

Keep your eyes and mind on the output. Don't get caught up in the technical details. A lot of things tend to get overlooked when we focus ourselves on the process rather than the output. This keeps your mind above the confusion of setting up the technical process and guides you to simpler ways of achieving the effects you want. Sometimes we get overwhelmed by complexity and forget how simple it was to pull off the whole thing. (Dickreuter)


While the technical process is secondary to the performance, each level of thinking or input eventually drives the output and increases its believability. The achievement of technical quality promotes suspension of disbelief (Thomas, Johnston 13-14). The animation, textures, lighting, modeling, and rigging all supplement the viewer’s experience. As productions become more advanced, the level of proceduralism must increase so that the animator can concentrate on the performance, with levels of complexity layered in automatically. This ultimately serves the viewer and increases realism by supplementing the output.


As artists pursue proceduralism in character animation, the inevitable conclusion leads to anatomical modeling and simulation. A simplified or stylized cartoon character will not benefit from anatomical simulation, but a realistic character can be made more believable by incorporating the simulation process. The most widely used and fully developed methods for character skinning consider only the external shape. In many cases this is adequate and appropriate, but it does not account for complex motion of the anatomical systems that drive the external skin (Fig 1. Gray). By adopting a skinning model that accounts for individual bones, tissues, and muscles covered by an elastic skin, realism can be increased and serve the performance (Wilhelms).


Fig.1 (Gray) Final render of simulation process.



Purpose of Research

Our goal was to add secondary levels of detail to the motion of the skin surface and ultimately improve the final output. Rather than attempting to create a perfect system from scratch, we leveraged many existing tools and technologies to create a system that achieves a pleasing visual approximation.


Several identifiable characteristics are evident in human or animal skin deformation.


1.                 The gross motion of the skin predictably follows the skeletal structure that drives it. The skin in close proximity to a bone will maintain that general positional relationship as the bone moves. Even though the position of bones is driven by muscles, the bone establishes a rigid connective structure, and ultimately, the motion of muscles and skin are relative to the positions of the bones (Aubel 15).

2.                 Local deformation effects of the skin are affected by muscle tension or relaxation. As a muscle contracts along its length, the volume of the muscle is maintained, and it expands around its girth. Muscle tension also drives local vibration effects. The effects of gravity, inertia, and acceleration are reduced on a muscle under tension but are more evident in a relaxed state.

3.                 Due to the existence of the musculoskeletal system, volume is maintained throughout the entire range of motion. Masses can shift, but the overall volume remains constant.

4.                 The solid skin surface prevents skin interpenetration.

5.                 As the muscles and bones move beneath the surface, elastic skin stretches and smooths out, or contracts and wrinkles.

A believable human or animal animated character should exhibit these characteristics. Observing the visual characteristics present in reality allows us to implement them in the simulation.  With traditional binding methods, only the gross skeletal motion is considered. A hierarchical joint system is created and manipulated by the animator. Then, a correspondence is established between vertices on the skin and the underlying bone structure. The transformations are simply applied to the skin for each skeletal pose. This works very well for the gross motion of skin but ignores the other secondary movement characteristics of skin. Recognizing that the lack of detail produces a less believable result, our skin model will use traditional transformation skinning to achieve gross motion, followed by the secondary deformations. The system will account for muscle tension, skin tension, and consistent volume (Aubel 16).
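The traditional weighted joint transformation stage that supplies our gross motion can be sketched as a weighted blend of per-joint transforms applied to the bind-pose vertex. This is a minimal illustration in Python, not the Maya implementation; the rigid (rotation about Z, translation) transform representation is a simplifying assumption.

```python
import math

def rot_z(theta, t=(0.0, 0.0, 0.0)):
    """A rigid joint transform: rotation about Z plus a translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]], list(t)

def apply_xform(M, v):
    """Apply a (rotation, translation) pair to a point."""
    R, t = M
    return [sum(R[i][k] * v[k] for k in range(3)) + t[i] for i in range(3)]

def skin_vertex(v_bind, joint_xforms, weights):
    """Weighted joint transformation skinning: each joint moves the
    bind-pose vertex rigidly, and the results are blended by weight."""
    out = [0.0, 0.0, 0.0]
    for M, w in zip(joint_xforms, weights):
        p = apply_xform(M, v_bind)
        for i in range(3):
            out[i] += w * p[i]
    return out
```

A vertex weighted equally between a stationary joint and one rotated 90 degrees lands halfway between the two rigidly transformed positions, which is exactly the "gross motion only" behavior the secondary deformation stages must then refine.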


In addition to oversimplifying the skin system, traditional models are deformation systems rather than simulations. A deformation system statically calculates the skin components based on incoming transformations. The advantages of a deformation system are speed, nonsequential calculation, and predictability. Simulation, by contrast, is periodic in nature: calculations from the previous frame influence the current frame. While a deformation system that accounts for secondary motion may be more production worthy, our system pursues a true time-dependent simulation. Unlike many muscle skin approaches that drive the skin with deformers, our skin system derives the external skin shape entirely from the procedural muscle and bone understructure.


Simulation process

Our approach centered on leveraging existing technologies available in Alias Maya and other mainstream commercial animation software packages. We speculated that we could utilize prior development to make the end product driven by a visual aesthetic but make the core concepts of the process accessible to users of other packages. We considered the approach of Stephen May that is based on artistic anatomy: “…By analyzing the relationship between exterior form and the underlying structures responsible for creating it, surface form and shape change may be understood and represented best.” (May 1)


Using tools commonly available in 3D packages, and specifically Alias Maya, this production workflow is followed to create the dynamic muscle and skin system (Fig.2 Gray).


1.                 Develop a rig and joint system for top level control.

2.                 Model and rig approximations of individual muscle masses, tissues, and bones.

3.                 Define the global motion of the skin.

4.                 Simulate surface level skin effects based upon muscle and bone rig.

5.                 Modify simulation data to reflect original designs.

6.                 Texture and render the output.

Fig.2 (Gray) Simulation stages: initial muscle animation, monocoque surface generation, unified mesh, detail areas reattached, final sculpting.



2.                 Background and Related Work

The ambitions of anatomy based deformation have consistently outstripped the abilities of hardware and software systems. “The complexity of simulating the human body and its behavior is directly proportional to the complexity of the human body itself, and is compounded by the vast number of movements it is capable of” (May 1). As a result, artists and engineers have developed more direct approaches to generate the visuals required.


The vast majority of skin deformation systems that are implemented in commercially available packages use a weighted joint transformation model. This works quite well for gross skeletal movement since each skin vertex location is computed based upon a weighted input of transformations from the joints. However, it does not account for surface details common to natural skin deformation. Free form deformers (lattices) are applied to create local external bulging and joint deformation effects. These deformations are not motivated by underlying muscle changes and apply only static changes.


A few commercial developers have recently entered the market with skin deformation tools. cgCharacter (cgCharacter) (Fig.3 cgCharacter) and Di-o-matic (Di-o-matic) (Fig.4 Di-o-matic) have introduced parametric muscle primitives with deformation based skinning systems for Discreet Max. The muscle primitives have attributes for collision, tension, and attachment. The parametric muscle shapes influence the skin using muscle cross sections or a ray casting approach. MuscleTk developed a muscle and skin system, influenced by Thuriot and Miller’s work, that uses parametric muscle bellies to deform the skin (cgToolkit) (Miller Thuriot) (Fig.5 MuscleTk). The Comet Muscle System for Alias Maya uses a similar approach to deform the skin but operates on most mesh objects (Comet) (Fig.6 Comet). All of these products are deformation packages rather than simulations.


Fig.3 (cgCharacter)


Fig.4 (Di-o-matic)


Fig.5 (MuscleTk)


Fig.6 (Comet)



Commercial animation studios have also developed and implemented custom muscle and skin systems. One of the first major uses of muscle-skin simulation was developed by The Secret Lab, which implemented its system in the ground-breaking Dinosaur. The mass and weight of the dinosaurs were simulated using rudimentary dynamic geometry to influence the motion of the skin (Dinosaur) (Fig.7 Dinosaur). PDI/DreamWorks Animation also used an in-house skin deformation system in Shrek (Shrek) (Fig.8 Shrek). Weta Digital developed a muscle based deformation/influence system for its Lord of the Rings character pipeline (LOTR) (Fig.9 LOTR). Industrial Light + Magic implemented a skin simulation for the dinosaurs in Jurassic Park III. In some cases, ILM was able to achieve better deformation because the skin was able to stretch across complicated areas (Jurassic Park III) (Fig.10 Jurassic). Pixar applied a muscle mass deformation system in its recent film The Incredibles (Incredibles) (Fig.105 Incredibles).



Fig.7 (Dinosaur)



Fig.8 (Shrek)


Fig.9 (LOTR)


Fig.10 (Jurassic Park III)


Among these in-house deformation systems, ILM made an additional advance. The effects team avoided the massive computational needs of a full muscle rig on Hulk by driving muscle shapes with a pose shape system. To achieve direct control of the output geometry and the aesthetics of muscle and skin movement, over 900 pose space shapes were modeled and rigged (Fig.11 Ferguson). Next, a skin was relaxed over the surface using a reworked in-house cloth simulation to create sliding effects. Elastic springs connected to the original pose shape model kept the dynamic skin in line with the skeletal movements of the character (Ferguson).


Fig.11 (Ferguson)


Of all these approaches, the ILM Hulk approach produced the most convincing results. Their goal was to create a production worthy aesthetic solution rather than a completely procedural one, and the look triumphed. By adding another layer of procedural control, our simulation strives to emulate and extend this aesthetically driven method. Again, the success of the final output drives the aesthetics.


3.                 Muscle System

In our implementation, we based our approach on previous research that was practical rather than theoretical, while leaning further toward simulation. Previous research has modeled musculature using simple parametric shapes. Many muscles are fusiform, or tapered at each end. These muscles contract in straight lines and are often simply represented by scaled spheres (May 4). More complex muscle shapes can be represented by multiple fusiform muscles parameterized along a spline curve or by bicubic patch shapes. There are significant computational advantages to this approach, but representing deformed shapes becomes very difficult.
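The constant-volume behavior of a scaled-sphere fusiform muscle reduces to a single formula. Treating the belly as an ellipsoid of revolution (an assumption made here purely for illustration), contracting the length forces the girth radius to grow:

```python
import math

def fusiform_radius(volume, length):
    """Girth radius of an ellipsoidal fusiform belly of fixed volume:
    V = (4/3) * pi * (length/2) * r^2, solved for r."""
    return math.sqrt(volume / ((4.0 / 3.0) * math.pi * (length / 2.0)))

# As the muscle contracts from its rest length, the belly bulges
# to conserve volume.
rest_volume = (4.0 / 3.0) * math.pi   # a belly of length 2.0, radius 1.0
bulged = fusiform_radius(rest_volume, 1.0)  # contracted to half length
```

Halving the length here widens the girth radius by a factor of sqrt(2), which is the bulging behavior the parametric models capture.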


Our muscle skin implementation attempts to emulate but simplify this process. Rather than working with individual muscle bellies, muscle masses with arbitrary polygon topology are modeled. This non-parametric approach allows local arbitrary shaping according to character designs. The fusiform model is approximated by tapering the ends of long muscles, while large muscle masses with less gross motion are abbreviated and combined. By approximating the anatomy, our implementation can indicate sub skin movement without resorting to the tedious process of creating and rigging each muscle.


As in previous muscle models, our implementation requires the creation of a reference vector to determine orientation of the muscle attachment points (May 4). These reference vectors inherit joint transformations and maintain consistent directionality in the muscle shape. The ends of the muscle shape are parented to the control skeleton and will always stay attached. The character is modeled and rigged in a neutral pose. Muscle contraction is driven by deviations from this rest position during simulation.


Muscle tension and contraction are approximated in our system using simple weighted joint transformation deformation on individual muscle masses. This approach alone, however, does not account for constant volume. Miller and Thuriot create a simple function to maintain volume in the individual representative muscle belly (Miller Thuriot). Their approach to maintaining volume does not account for muscle to bone or muscle to muscle collisions that offset the expansion of the muscle. Following our visual approach, though, muscle collisions are not worth the extended computation time. Sculpting target shapes for flexion determines the offset volume. This allows us to position the deformation along the length of the muscle mass with an appropriate offset to approximate sub masses and collisions.


In an effort to approximate muscle interaction further, foundation muscle motion is established, then surface muscles are attached relative to the foundation muscle deformation. Additionally, individual vertex transformations of muscle masses can be rigged to influence nearby masses.


The soft tissues in muscle masses carry an inertial force from their movement. We implement an elastic spring model in Alias Maya called a jiggle deformer. This deformer allows the effect to be reduced based upon a grayscale map. This map reduces the local influence of the deformer as it approaches the ends of the fusiform muscle shape. This deformer is also used for larger flatter muscle masses that don’t slide much, but react to inertial forces.
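The jiggle behavior can be approximated per coordinate by a damped spring lagging behind its animated target, with the painted grayscale weight scaling how much dynamic offset survives. This is a behavioral sketch, not Maya's actual jiggle node; the stiffness and damping values are illustrative assumptions.

```python
def jiggle_step(pos, vel, target, weight, stiffness=0.3, damping=0.8):
    """One integration step of a jiggle-style spring for one coordinate.
    The point lags behind its animated target and springs back; `weight`
    (0..1, from the grayscale map) fades the effect toward the muscle ends."""
    vel = damping * (vel + stiffness * (target - pos))
    pos = pos + vel
    # Where the map approaches zero, the point rigidly follows the target.
    pos = target + weight * (pos - target)
    return pos, vel
```

With weight 0 the vertex snaps to the animated target (the fusiform ends); with weight 1 it oscillates and settles, which reads as inertial soft tissue.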


Our muscle system consists of arbitrary polygon shapes rigged to underlying bones and other muscles with artist controlled flexing.


4.                 Skin System

The outer skin surface is the most elusive and difficult to replicate in a realistic manner. It is the only item visible during rendering, and any problem in the underlying system will be inherited by the outer skin layer. This outer skin layer also has the most complex properties, as its characteristics change from one area of the model to another. It was important for us to develop a truly dynamic outer layer, even if this meant creating a less predictable result. We did not want to simply influence or drive the final skin position, but to simulate it. There are other, more predictable methods of showing the effects of muscles under skin, but we wanted to create a system where the muscles and bones would actually define the outer shape of the character.


A common Maya soft body approach is followed to create an elastic surface where each vertex of a surface is connected to a particle. The particles’ start position is derived from the soft bound skin mesh position, and then the dynamic simulation controls the movement of the mesh. Soft body animation can be controlled by goals, forces and springs. To transfer the local deformation effects of the muscles to the skin layer, the skin surface is set to collide with the animated muscle masses, and the particle system is attracted to a topologically identical goal surface. The collision engine tessellates all surfaces into polygons, and more predictable and stable results are achieved by animating polygons rather than tessellating NURBS surfaces on the fly.
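The goal-weighted behavior can be sketched per particle: each step, the particle accelerates toward its goal position in proportion to its per-vertex goal weight. This is a simplified stand-in for Maya's particle goal (the damping term is an assumption), meant only to show how the weight trades attraction against sliding.

```python
def goal_step(pos, vel, goal, goal_weight, damping=0.9):
    """One step of a goal-driven soft-body particle: the per-vertex
    goal weight controls how strongly the particle is pulled back to
    the goal surface (higher weight = less sliding)."""
    acc = [goal_weight * (g - p) for g, p in zip(goal, pos)]
    vel = [damping * (v + a) for v, a in zip(vel, acc)]
    return [p + v for p, v in zip(pos, vel)], vel
```

Iterating this step relaxes the particle onto its goal; in the full system, collisions with the muscle surface and inter-particle springs act on the same positions between steps.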


After a very involved testing process, with successes and failures, we observed a common thread in all the simulations. The second law of thermodynamics states that the entropy (a measure of degradation, or a trend toward disorder) of the universe increases over time. The simulations would begin beautifully and then degrade over time. What if entropy were controlled in the skin system by restarting the simulation at every frame?


By calculating the gross skeletal movement for the outer skin using traditional deformation, we were able to introduce a high degree of stability into the system. The challenge shifted from creating perfect initial conditions once to creating ideal conditions repeatedly. If there is a problem with one frame, it can be addressed individually. If there is a problem with a set of frames, input parameters can be adjusted and the simulation rerun for that group of frames. The periodic nature of the system is reintroduced by relating the previous successful frame to the current simulation.
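The per-frame reset strategy reduces to a simple loop: every frame is re-seeded from ideal deformation-derived conditions, and only the previous successful result feeds forward. The two callbacks are hypothetical stand-ins for the Maya stages.

```python
def simulate_sequence(num_frames, initial_conditions, relax_frame):
    """Entropy-limited simulation loop. `initial_conditions(f)` supplies a
    fresh, deformation-derived start state for frame f; `relax_frame(start,
    prev)` runs the dynamic relaxation, optionally referencing the previous
    successful frame. Both callbacks are illustrative stand-ins."""
    results, prev = [], None
    for f in range(num_frames):
        start = initial_conditions(f)       # ideal conditions every frame
        result = relax_frame(start, prev)   # periodic link to last good frame
        results.append(result)
        prev = result
    return results
```

Because each frame starts from a clean state, errors cannot accumulate across the sequence; the `prev` argument is the only channel through which history re-enters.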


When driven by a properly rigged muscle understructure, this process creates a visually pleasing approximation of local skin deformations. Since a core philosophy of this project was to utilize existing tools, the focus of the project became stabilizing the process to work in a range of input conditions.


Maya’s collision engine is extremely sensitive to initial conditions and has no recovery mechanism for collision errors. As such, the initial state of the simulation must lie completely outside the collision surface. The engine has no method for determining what is inside or outside of a mesh; once a particle passes through, it collides with the inside of the surface and becomes trapped, creating an error in the simulated surface.


A multi-tiered approach was developed to tackle this problem.


1.                 Isolating each simulation frame.

2.                 Creating a monocoque collision surface.

3.                 Developing pose space deformation tools to properly set initial skin positions.

4.                 Establishing a discrete muscle expansion and contraction stage for influencing the skin.

5.                 Isolating the simulation for appendages and high detail areas.

6.                 Filtering the dataset.

7.                 Applying modeling changes to the simulated sequence.


Isolating each simulation frame

Constantly resetting the simulation allows the stages to be computed independently but also creates very large data sets. We developed a custom plugin to read and write complex geometry sequences to disk. Vertex locations and face-edge connections are saved to a binary file and then loaded into a Maya shape node on a per frame basis. This frees memory for calculation and removes the limit on the length of a simulated sequence.
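The cache format can be sketched as a flat binary record per frame: a vertex count followed by xyz doubles. The real plugin also stores face-edge connectivity; this simplified Python stand-in keeps only positions.

```python
import struct

def write_frame(path, verts):
    """Write one frame's vertex positions: a uint32 count, then
    little-endian xyz doubles for each vertex."""
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(verts)))
        for v in verts:
            f.write(struct.pack("<3d", *v))

def read_frame(path):
    """Read a frame back into a list of (x, y, z) tuples."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<I", f.read(4))
        return [struct.unpack("<3d", f.read(24)) for _ in range(n)]
```

Streaming one frame at a time from disk is what keeps memory flat regardless of sequence length.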


Monocoque collision

Most collision errors occur when there are opposing collision surfaces that trap a vertex in an extremely acute angle. The engine will allow the vertex to pass beneath the collision surface where it becomes trapped. By softening the incoming angles, then gradually making them more acute, the particle is allowed to slide on the surface and avoid becoming trapped. In order to soften the incoming angles using an average vertices function, the muscle masses must be combined into a single surface. Polygon Boolean union operations combine the surfaces. As muscle masses move, the topology of the collision surface changes over time. This does not create a problem since the simulation conditions are reset on each frame.
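The "average vertices" softening pass is essentially a Laplacian smooth: each vertex is blended toward the mean of its neighbors, relaxing the acute angles that trap particles. A minimal sketch, with the blend factor as an assumed parameter:

```python
def average_vertices(verts, neighbors, alpha=0.5, iterations=1):
    """Blend each vertex toward the centroid of its neighbors.
    `neighbors[i]` lists the vertex indices adjacent to vertex i."""
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            ns = neighbors[i]
            avg = [sum(verts[j][k] for j in ns) / len(ns) for k in range(3)]
            new.append([v[k] + alpha * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts
```

Running a few iterations first, then gradually restoring the sharp shape, gives the particles a gentle surface to slide onto before the angles become acute again.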


Boolean union operations are complex and prone to errors. To calculate a combined surface properly, the two input meshes must have no self intersecting geometry, no coincident geometry, no border edges, and significant face area. The operation is fairly stable when given two input surfaces. When the output from the first operation is processed by a second operation, failure is almost inevitable.


A component of the Boolean union failures was due to mathematical precision errors. Scaling the input meshes by a factor of 1000 provides the decimal precision needed to calculate Boolean operations on meshes with small face areas.


Multiple Boolean operations can still create coincident geometry. To remedy this problem, we developed a recursive jitter algorithm to randomly translate the input shapes causing Boolean failure until the offending condition is removed. A spherical random function is employed to move the input mesh, while the output of the operation is sampled for success. The algorithm will iterate through all of the Boolean operations until a final mesh is successfully generated. This workaround allows successful Boolean operations with more than 40 input meshes (Fig.12 Gray).


Fig.12 (Gray) Boolean union simulation groups.
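The jitter workaround can be sketched as a retry loop around the union: whenever the operation fails, the incoming mesh is nudged by a small spherical random offset and the union is attempted again. The `try_union` below is a hypothetical stand-in for the Boolean engine (meshes are reduced to point lists, and "failure" means coincident geometry); only the retry structure reflects the actual algorithm.

```python
import math, random

def jitter_vector(radius):
    """Uniform random direction scaled to `radius` (the spherical
    random function used to nudge failing inputs)."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if 1e-6 < n <= 1.0:          # rejection-sample inside the unit ball
            return [radius * c / n for c in v]

def try_union(a, b):
    """Hypothetical stand-in for the Boolean engine: fails (returns None)
    on coincident points, otherwise concatenates the meshes."""
    for p in a:
        for q in b:
            if max(abs(x - y) for x, y in zip(p, q)) < 1e-9:
                return None
    return a + b

def union_with_jitter(meshes, union_op, radius=1e-3, max_attempts=20):
    """Fold a Boolean union across all meshes, randomly translating the
    incoming mesh whenever the operation fails, until it succeeds."""
    result = meshes[0]
    for mesh in meshes[1:]:
        for _ in range(max_attempts):
            out = union_op(result, mesh)
            if out is not None:
                result = out
                break
            offset = jitter_vector(radius)   # nudge the offending input
            mesh = [[p[k] + offset[k] for k in range(3)] for p in mesh]
        else:
            raise RuntimeError("Boolean union failed after jittering")
    return result
```

Because the offset is tiny relative to the model, the visual result is unchanged while the coincident-geometry failure case is eliminated.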


Pose space deformation

Achieving proper input conditions for the skin shape required implementing pose shape deformation. In addition to joint weighted deformation, we needed to be able to arbitrarily determine the start shape of the pre-simulated skin at any pose. Pose shape deformation operates by sculpting a target shape at a given pose, then applying it based on joint orientation. This implementation is similar to the ILM/Hulk approach. We do not use pose shape deformation to generate the final mesh but to guarantee proper per frame input conditions (Pose space) (Ferguson).


An up vector rig indexes the shapes. In most cases, deriving the pose as a function of joint rotation can be inconsistent: several combinations of XYZ, ZYX, or YXZ rotations can achieve the same pose. To identify the pose more predictably, a spherical directional ‘vector’ model is referenced to determine joint orientation. A pose is represented as a point on the surface of a sphere, with the joint at the center of that sphere. As the joint changes orientation, it moves closer to or further from that ‘pose point’ (Fig.13 Gray).


Fig.13 (Gray) Vector driven blend shapes update predictably throughout range of motion.


The vector model provides a predictable, three dimensional model for determining joint position. The child joint determines the current vector and thus the orientation. In order for the vector to update properly, it must be parented in the hierarchy above the joint that determines the orientation.
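The spherical pose index reduces to an angle comparison between the current child-joint direction and the stored pose direction. The linear falloff below is an assumed weighting choice, included only to show how a pose target fades in as the joint approaches its pose point:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def pose_weight(current_dir, pose_dir, falloff=math.pi / 2):
    """Weight a pose target by how closely the child joint's direction
    matches the stored pose point: full weight at zero angle, fading
    linearly to zero at `falloff` radians."""
    d = sum(a * b for a, b in zip(normalize(current_dir), normalize(pose_dir)))
    angle = math.acos(max(-1.0, min(1.0, d)))
    return max(0.0, 1.0 - angle / falloff)
```

Because the comparison is purely directional, the ambiguity of equivalent XYZ/ZYX/YXZ rotation combinations never enters the weighting.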


Prior to binding a skin in Maya, local vertex transformations occur in world space: when a point is moved along the Y axis, it translates along the world space Y axis. After binding, each vertex inherits the blended weighting of the transformations of the input bones. An attempt to move a point along the Y axis now moves it along an arbitrary axis, because each vertex translates in its own local coordinate system derived from the input bone transformations. A workaround is to duplicate the target at the bind pose, move the character to the problem pose, and then sculpt the duplicated object to achieve the result. This still does not solve the altered coordinate space problem: sculpting the target is very difficult and counterintuitive since the local coordinate axes are not aligned with world space.


Rather than trying to extrapolate and reverse engineer all of the input transformations, the local coordinate system of each vertex is examined to determine the rotations that align it to world space. A copy of the model can then be manipulated in world space, and the inverse transformation applied to the original bound shape. This operation occurs at the problem pose. The input transformations are then removed by resetting the character to the bind pose (where the joint transformations line up with world space coordinates) and extracting the result. Using this method, targets are extracted quickly. The Maya tweak node stores post-skin transformation data; it is used to temporarily hold the local transformations and is reset before creating the next shape.
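The extraction step can be illustrated for a single vertex under a rigid per-vertex transform (a simplification; Maya's blended skinning transform is more general): pull the world-space sculpt back through the inverse transform, then subtract the bind position to obtain the bind-space delta.

```python
import math

def rot_z(theta, t):
    """A rigid per-vertex skinning transform: rotation about Z, then translation."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]], list(t)

def inverse_apply(M, p):
    """Undo a rigid transform: R^T (p - t)."""
    R, t = M
    d = [p[i] - t[i] for i in range(3)]
    return [sum(R[k][i] * d[k] for k in range(3)) for i in range(3)]

def extract_target(bind_pos, vertex_xform, sculpted_world):
    """Pull a world-space sculpt back through the vertex's skinning
    transform and subtract the bind position, yielding a bind-space
    blend-shape delta that skinning will carry to the posed position."""
    local = inverse_apply(vertex_xform, sculpted_world)
    return [local[i] - bind_pos[i] for i in range(3)]
```

Applying the extracted delta at the bind pose and re-skinning reproduces the sculpt exactly at the problem pose, which is what makes sculpting in world space practical.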


The system allows a pose shape to be relative to preexisting blend targets, so the final result is the addition of all the pose targets. The difference between the existing targets and the new target is removed for the specific pose. This works by reordering the tweak, then rolling off the envelope of the incoming blends to extract the stacked shape (Fig.14 Gray).


Fig.14 (Gray) Custom pose shape deformation in Maya.



Muscle expansion and skin contraction

To create stable input conditions for the collision engine, an inside and an outside of our monocoque surface are established by guaranteeing that the collision surface is completely contained within the skin shape. To increase the accuracy of the process, the simulation is divided into two discrete stages. First, the muscles expand from a reduced state to their normal state, displacing any areas of the skin surface that come into initial contact. Next, the skin surface is pulled down onto the monocoque surface and assumes the shape of the input muscle masses using particle/mesh collisions. In both cases, the soft body animation is also influenced by elastic springs that cause the mesh to equalize and relax across the surface. Particle goals are used to stabilize and control the motion of the particles and collisions. The particle goal weight is set per vertex to allow more or less sliding.


In order for the muscle masses to expand, a unique topologically identical shrunken model is established for each muscle. The shape is modeled to match the control skeleton to establish a buffer from the skin surface. Since the Boolean process for creating a monocoque collision surface is computationally expensive and relies on very particular input conditions, the simulation required implementing a method to deform the unified monocoque surface to match the shrunken state of the individual muscles. The Boolean operations change the topology of the resultant mesh, so it is difficult to establish shape connections based on the vertex indices. Instead, connections are based on proximity. The monocoque surface position is pulled into the shrunken position using elastic springs.


This works well for vertices that are not near surface intersections but does not account for newly created geometry. The Boolean operations alter the vertex face normals of the output surface. These vertices are isolated, and their positions are interpolated based on the direct proximal connections.
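Because the Boolean output shares no vertex indices with the original muscles, the correspondences above are found by distance. A brute-force nearest-vertex sketch (a spatial acceleration structure would be used at production scale):

```python
def proximity_map(src_verts, dst_verts):
    """For each source vertex, find the index of the nearest destination
    vertex; used to connect the Boolean monocoque surface to the shrunken
    muscle shapes when index-based mapping is impossible."""
    mapping = []
    for s in src_verts:
        best, best_d = 0, float("inf")
        for j, d in enumerate(dst_verts):
            d2 = sum((a - b) ** 2 for a, b in zip(s, d))
            if d2 < best_d:
                best, best_d = j, d2
        mapping.append(best)
    return mapping
```

The resulting mapping drives the elastic springs that pull the monocoque surface into its shrunken state.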


As the monocoque collision surface expands to match its original shape, a dynamic collision is created between the surfaces. To control the expansion, a particle goal is created that retains the original input conditions of the skin shape. The output from this stage guarantees ideal input conditions for skin contraction. The modified skin shape is pulled by a particle goal into the collision surface, where it slides and relaxes into a final shape.


Segmenting the simulation

Maya’s soft body collision engine does not allow per particle collision assignments. To avoid calculating collisions for areas that have no spatial proximity, the simulation is segmented into overlapping relative areas. For example, the skin from the right arm will never need to directly interact with the skin from the left leg. This process also allows for more local complexity and faster calculation.


Segmenting the character for simulation is relatively simple: a master character is created, and then simulation groups are built. Bringing the surface back into one unified mesh for texturing and rendering is quite complex. Slight discrepancies in the simulation prevent the borders of the segmented areas from aligning perfectly, and the border condition slightly alters the simulation results. We developed a method to blend between topologically unique segments of geometry using a paintable per vertex map (Fig.15 Gray).


Fig.15 (Gray) Blended stitching of simulation segments.


The unified mesh is split into transition areas and sub-segments. Transition areas completely contain the sub-segments' overlap area. A distance check is performed from vertices of the transition area to the unified mesh. When a set of vertices is determined to share the same world space coordinates, the vertex locations of each of the sub-segments are fed into a node that outputs a weighted average of the result. The weight is derived from a map painted on the transition surface, where a grayscale value represents ownership by the respective sub-segments. The non-overlapping sub-segments are directly connected to the vertex positions of the unified mesh.
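At each shared vertex, the stitch is a normalized weighted average of the sub-segment positions, with the weights read from the painted map:

```python
def stitch_vertex(positions, weights):
    """Blend corresponding vertex positions from overlapping
    sub-segments; `weights` are the painted grayscale ownership
    values, normalized so the result is a true weighted average."""
    total = sum(weights)
    return [sum(w * p[k] for p, w in zip(positions, weights)) / total
            for k in range(3)]
```

Painting the map so the weights cross over gradually inside the transition area is what hides the border condition between simulation groups.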


Once the relationships are defined on the unified mesh and reference surfaces, the world space vertex locations are copied from the simulation data, and the unified result is extracted and saved.


Periodic filtering

Creating a monocoque surface via Boolean operations introduces a shifting topology on the collision surface. The shift creates a frame to frame instability in the final simulation. The inaccuracy appears as a high frequency jitter on certain areas of the surface. To mitigate the jitter effects, a custom periodic filter is employed.


An elastic spring is created between each vertex's positions on the immediately preceding and following frames. Since the spring is a straight line, if the position of the vertex on the processed frame lies predictably along the line between its neighboring-frame positions, no change is made to the vertex position. If the vertex is anomalous, its position is updated to conform to the surrounding frames (Fig.16 Gray).
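The filter logic for a single vertex can be sketched as below. This is a minimal Python illustration of the idea, not the paper's tool; the function name and deviation threshold are our own assumptions.

```python
# Sketch: temporal filter for one vertex across three consecutive frames.
# In-line vertices pass through unchanged; outliers are projected onto
# the straight 'spring' between their temporal neighbors.

def filter_vertex(prev_pos, cur_pos, next_pos, threshold=0.01):
    """Relax an anomalous vertex toward the line between its positions
    on the preceding and following frames.

    Each argument is an (x, y, z) position of the same vertex on the
    preceding, processed, and following frames.
    """
    # Vector from the previous-frame to the next-frame position.
    d = [n - p for p, n in zip(prev_pos, next_pos)]
    v = [c - p for p, c in zip(prev_pos, cur_pos)]
    dd = sum(x * x for x in d)
    # Parameter of cur_pos's projection onto the prev->next line.
    t = (sum(a * b for a, b in zip(v, d)) / dd) if dd else 0.0
    closest = tuple(p + t * x for p, x in zip(prev_pos, d))
    dist = sum((c - q) ** 2 for c, q in zip(cur_pos, closest)) ** 0.5
    # Predictable (in-line) vertex: leave it alone.  Anomalous vertex:
    # snap it to the nearest point on the line.
    return cur_pos if dist <= threshold else closest
```

Running this per vertex per frame removes isolated pops while leaving legitimate straight-line motion untouched.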


Fig.16 (Gray) Periodic filtering as applied to a single point on a mesh.



Normal deformation

In some cases, the output of the simulation may vary slightly from the intended design, so we developed a method to reintroduce sculpted surface detail driven by the simulation output. Traditional blend shape deformation interpolates linearly between sets of vertices, operating in world space or in object-relative space. The output of the simulation, however, carries no object-level transformation; all of the data is stored as world space vertex positions. As the object changes orientation, the only way to determine a consistent offset from the surface is to reference the surface normal (Fig.17 Gray).


Fig.17 (Gray) Approach to post deformation blend shape application.


A vertex on the static sculpted surface and its corresponding vertex on a static simulated reference surface are sampled. An angle-between function delivers a rotation axis and magnitude that rotate the reference vertex onto the sculpted vertex. This rotation can then be applied to any frame of the simulated sequence.
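The angle-between and rotation steps can be sketched as follows. This is an illustrative Python sketch under our own assumptions (plain 3-tuples for vectors, Rodrigues' rotation formula); it is not the paper's plugin implementation.

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotation_between(ref, sculpt):
    """Axis and angle rotating direction ref onto direction sculpt."""
    axis = _cross(ref, sculpt)
    norm = math.sqrt(_dot(axis, axis)) or 1.0
    axis = tuple(x / norm for x in axis)
    cosang = _dot(ref, sculpt) / (math.sqrt(_dot(ref, ref)) *
                                  math.sqrt(_dot(sculpt, sculpt)))
    # Clamp against floating-point drift before acos.
    return axis, math.acos(max(-1.0, min(1.0, cosang)))

def rotate(v, axis, angle):
    """Rodrigues' formula: rotate v about the unit axis by angle."""
    c, s = math.cos(angle), math.sin(angle)
    k_cross_v = _cross(axis, v)
    k_dot_v = _dot(axis, v)
    return tuple(v[i] * c + k_cross_v[i] * s + axis[i] * k_dot_v * (1 - c)
                 for i in range(3))
```

The axis-angle pair is computed once from the static sculpted and reference surfaces, then reapplied to the corresponding vertex on every simulated frame, so the sculpted offset follows the surface as it changes orientation.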


5.                 Conclusion and Future Work

The most notable obstacle to procedural character rigs is the time investment required to create the anatomical structures and calculate the final surface. Even though the understructure is a rough approximation, it requires hundreds of time-consuming setups, creating an implementation overhead that makes practical use difficult. The rigging time is significant regardless of which system is used: pose space, influence shapes, or full muscles (Fig.18 Gray).


Fig.18 (Gray) Final render simulation frame.


Many of the rigging functions were automated, but attachment points, tension, and muscle profile all needed to be tested across a range of motion. A robust and versatile system that simplifies modeling, attachment, and tension would be necessary for a production-ready system.


The simulation time on complex segments became significant and challenged production viability. Without a unified plugin, calculation will remain slow; applying this process in a production pipeline would require converting the MEL scripts to the API. Any further significant improvement would require building the system from the ground up (Fig.19 Gray).


Fig.19 (Gray) Simulation interface.


For muscle and skin simulation as part of a procedural rig to become practical for production, a marriage of research and artist-friendly tools must take place. Any system must provide fast and accurate feedback in the setup and simulation. Without timely feedback, the system will require a scientist rather than an artist. As we pursue dynamic and believable performances in animation, the level of sophistication will increase and will require the development of procedural secondary motion.


6.                 Acknowledgments

This project benefited from significant contributions from Michael Hutchinson, who was involved from the onset of the project and provided consistent programming assistance. He was instrumental in developing and testing the general approach and the muscle rigging technology. Hutchinson defined the muscle articulation for the proof of concept and created the plugin implementation of the sculpt tool. Several initial concepts for the pose space script were additional key contributions.


Jeff Knox developed the geometry sequence exporter and importer which allowed us to work with large datasets. Knox also created the matrix transformation math for the pose space script and suggested the core technology of the script based sculpt tool. Matt Schiller developed user scriptable high level animation controls and developed the user interface for the pose space script. Additional thanks to Jeremy Chinn, Jeff MacNeil, and Aaron Adams for assistance with the proof of concept scene.







7.                 Works Cited



Aubel, Amaury, and Daniel Thalmann. “MuscleBuilder: a modeling tool for human anatomy.” J. Comput. Sci. Technol. 19.5 (2004): 585-595.

Aubel, Amaury and Daniel Thalmann. “Efficient Muscle Shape Deformation.” DEFORM '00/AVATARS '00: Proceedings of the IFIP TC5/WG5.10 DEFORM'2000 Workshop and AVATARS'2000 Workshop on Deformable Avatars. 2000. 132-142.


“CG Toolkit Presents Muscle TK.” 2004. Cg Toolkit. 10 April 2005. <>


Comet, Michael. “Comet Muscle System for Maya.” 2004. Comet Cartoons. 10 April 2005. <>


Dickreuter, Raffael, Bernard Lebel, and Will Mendez. “Interview with Habib Zargapour.” 2004. XSI Base. 28 Jan. 2004 <>


Dinosaur. "Featurette: Building A Better Dinosaur." Dir. Eric Leighton, Ralph Zondag. 2000. DVD Disney. 30 Jan 2001.


Ferguson, Aaron P. “Skin Deep Beauty: A Production Friendly Creature Geometry Pipeline Used on ‘Hulk’.” SIGGRAPH '03: Proceedings of the 30th annual conference on Computer graphics and interactive techniques. 2003. 10 April 2005. <>


Jurassic Park III (Widescreen Collector’s Edition). Dir. Joe Johnston, Anim. Dir. Dan Taylor. 2001. DVD Universal Studios. 24 Aug 2004.


Lewis, J. P., Matt Cordner, and Nickson Fong. “Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation.” SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques. 2000. 10 April 2005. <>

Maya Techniques: Custom Character Toolkit. "Implementing a Deformer with Paintable Weights." Erick Miller and Paul Thuriot. 2004. DVD Alias.


“Product Description – Absolute Character Tools.” 2004. 10 April 2005. <>


“Products > Hercules > Overview.” 2004. di-o-matic. 10 April 2005. <>


Scheepers, Ferdi, Richard E. Parent, Wayne E. Carlson, and Stephen F. May. “Anatomy-based modeling of the human musculature.” SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques. Aug. 1997. 10 April 2005. <>


Shrek. "Featurette: The Tech of Shrek." Dir. Vicky Jenson, Andrew Adamson. 2001. DVD Dreamworks. 2 Nov 2001.


The Lord of the Rings – The Fellowship of the Ring (Special Extended Edition). "Disc 4: ‘From Vision to Reality’ Bringing the characters to life" Dir. Peter Jackson. 2001. DVD NewLine. 12 Nov 2001.


Thomas, Frank, and Ollie Johnston. The Illusion of Life: Disney Animation. New York: Disney Editions, 1981.


Wilhelms, Jane. “Anatomically Based Modeling.” SIGGRAPH '97: Proceedings of the 24th annual conference on Computer graphics and interactive techniques. 1997. 10 April 2005. <>