Abstract
In recent years, there has been considerable interest in the area of Interactive Virtual Humans
(IVHs). Virtual characters that interact naturally with users in mixed realities have many different
applications, such as interactive video games, virtual training and rehabilitation, or virtual
heritage. The main purpose of using interactive virtual humans in such applications is to increase
the realism of the environment by adding life-like characters. The means by which these
characters perceive their environment and express themselves greatly influence how
convincing they are. The work presented in this thesis aims to improve the expressive
capabilities of virtual characters, in particular the animation of IVHs.
Because of the complexity of interaction, a high level of control is required over the face
and body motions of the virtual humans. In order to achieve this, current approaches try to
generate face and body motions from a high-level description. Although this indeed allows precise
control over the movement of the virtual human, it is difficult to generate natural-looking
motion from such a high-level description. Another problem that arises when animating
IVHs is that motions are not generated continuously. Therefore, a flexible animation scheme is
required that ensures a natural posture even when no animation is playing. Finally, because
of the many different components that an Interactive Virtual Human consists of, the animation
model should be as efficient as possible.
In this thesis, we present a new animation model based on a combination of motion
synthesis from motion capture and a statistical analysis of prerecorded motion clips. As opposed
to existing approaches that create new motions with limited flexibility, our model adapts existing
motions by automatically adding dependent joint motions. This renders the animation more
natural; moreover, since our model imposes no conditions on the input motion, it can easily be
linked with existing gesture synthesis techniques for IVHs.
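The adaptation step described above can be sketched, under loose assumptions, as a statistical dependency model learned from motion capture: dependent joint channels are predicted from the primary channels of the input motion. The joint counts, toy data, and the plain least-squares fit below are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

# Hypothetical sketch: learn which joint motions depend on others
# from prerecorded motion-capture frames, then augment any input
# motion with the predicted dependent joint motion.
rng = np.random.default_rng(0)

# Toy training data: 500 mocap frames, 6 "primary" joint channels
# (e.g. driven by a gesture synthesiser) and 4 "dependent" channels
# (e.g. spine/shoulder compensation correlated with the primaries).
primary = rng.normal(size=(500, 6))
true_map = rng.normal(size=(6, 4))
dependent = primary @ true_map + 0.01 * rng.normal(size=(500, 4))

# Fit a linear dependency model by least squares.
W, *_ = np.linalg.lstsq(primary, dependent, rcond=None)

# At runtime, no conditions are imposed on the input motion: any
# frame can be augmented with its predicted dependent channels.
frame = rng.normal(size=(1, 6))
augmented = np.hstack([frame, frame @ W])
print(augmented.shape)  # (1, 10)
```

Because the mapping is fitted offline, the runtime cost per frame is a single matrix product, which fits the efficiency requirement stated above.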
To ensure continuous, realistic motion, a basic layer of motions, called idle motions,
is always present. These motions are generated by sequencing prerecorded motion segments
organised in a graph. The path followed through this graph is controlled by high-level
constraints, such as an emotional state. On top of that, small variations in posture are added so
that the character is never static. Because we use a linear representation for joint orientations,
blending and interpolation are performed very efficiently, resulting in an animation engine especially
suitable for real-time applications.
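The idle-motion layer can be sketched roughly as follows: prerecorded segments form the nodes of a graph, traversal is biased by an emotional state, and joint orientations live in a linear (exponential-map style) representation so that blending reduces to a weighted sum. The segment names, transition weights, and poses below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of idle-motion sequencing through a graph,
# with the path biased by a high-level emotional state.
rng = np.random.default_rng(1)

# End pose of each segment: one 3-vector per joint (4 joints here),
# in a linear orientation representation.
poses = {name: rng.normal(size=(4, 3))
         for name in ("rest", "shift_weight", "look_around")}

# Graph edges with per-emotion transition weights (made up).
graph = {
    "rest":         [("shift_weight", {"calm": 0.8, "agitated": 0.2}),
                     ("look_around",  {"calm": 0.2, "agitated": 0.8})],
    "shift_weight": [("rest", {"calm": 1.0, "agitated": 1.0})],
    "look_around":  [("rest", {"calm": 1.0, "agitated": 1.0})],
}

def next_segment(current, emotion):
    """Pick the next segment, weighting edges by the emotional state."""
    nodes, weights = zip(*[(n, w[emotion]) for n, w in graph[current]])
    p = np.array(weights, dtype=float)
    return str(rng.choice(nodes, p=p / p.sum()))

def blend(pose_a, pose_b, t):
    """Linear representation: interpolation is a direct weighted sum,
    with no per-frame quaternion normalisation or slerp."""
    return (1.0 - t) * pose_a + t * pose_b

state = "rest"
nxt = next_segment(state, "calm")
transition_pose = blend(poses[state], poses[nxt], 0.5)
print(nxt, transition_pose.shape)
```

The random walk keeps the character moving even when no gesture is being synthesised, while the linear blend keeps the per-frame cost at a handful of multiply-adds per joint.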