Providing a virtual human with human-like reactions and decision-making is
more complicated than controlling its joint motions from captured or
synthesized data. This is where we engage the viewer with the character's
personality and demonstrate its skill and intelligence in negotiating its
environment, its situation, and other agents. This level of performance
requires significant investment in decision-making tools. We presently use a
two-level architecture (sketched after the list below):
- a lower level that optimizes reactivity to the environment (for
example, in the choice of footsteps for locomotion through the
space) [37,24,7];
- a higher level that executes parameterized scripts or plans complex task
sequences (for example, choosing which room to search in order to
locate an object or another agent, or outlining the primary steps that
must be followed to perform a particular task) [31,4].
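To make the division of labor concrete, here is a minimal sketch of such a
two-level split. The class and method names (TaskPlanner, ReactiveController,
plan_search, choose_footstep) and the toy grid logic are hypothetical
illustrations, not the actual system's interfaces.

```python
# Hypothetical sketch of a two-level agent architecture: a high-level
# planner emits coarse task steps; a low-level controller reacts to the
# environment while carrying each step out.
from dataclasses import dataclass

@dataclass
class TaskPlanner:
    """Higher level: expands a task into primary steps,
    e.g. which rooms to search in order to locate an object."""
    rooms: list

    def plan_search(self, target):
        # One coarse action per room; each action is refined below
        # into many reactive decisions (individual footsteps).
        return [(room, target) for room in self.rooms]

@dataclass
class ReactiveController:
    """Lower level: optimizes immediate reactions to the environment,
    here a toy footstep chooser on a 2-D grid."""
    obstacles: set

    def choose_footstep(self, pos, goal):
        # Greedy step toward the goal; sidestep if the cell is blocked.
        dx = (goal[0] > pos[0]) - (goal[0] < pos[0])
        dy = (goal[1] > pos[1]) - (goal[1] < pos[1])
        step = (pos[0] + dx, pos[1] + dy)
        if step in self.obstacles:
            step = (pos[0] + dx, pos[1])   # simple avoidance fallback
        return step

# Usage: the planner sequences rooms; the controller walks to each one.
rooms = {"hall": (3, 0), "study": (3, 4)}
planner = TaskPlanner(rooms=list(rooms))
controller = ReactiveController(obstacles={(1, 1)})
pos = (0, 0)
for room, target in planner.plan_search("keys"):
    goal = rooms[room]
    while pos != goal:
        pos = controller.choose_footstep(pos, goal)
    print(f"searched {room} for {target}, now at {pos}")
```

The point of the split is that the planner never reasons about footsteps and
the controller never reasons about rooms; each level can be replaced or tuned
independently.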
The architecture is built on Parallel Transition Networks
(PaT-Nets) [3]. Nodes represent executable processes; edges carry
conditions that, when true, cause a transition to another node (process); and
a combination of message passing and global memory provides coordination and
synchronization across multiple parallel processes.
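As a rough illustration of this structure (not the actual PaT-Net
implementation, which the cited work describes), the sketch below encodes
nodes as executable processes, edges as condition/successor pairs, and
coordination through shared memory plus message queues; all names here are
hypothetical.

```python
# Hypothetical sketch of the PaT-Net idea: nodes are executable
# processes, edges carry boolean conditions that trigger transitions,
# and nets coordinate via shared global memory and message passing.
class PaTNet:
    def __init__(self, name, start, memory):
        self.name = name
        self.current = start      # current node (an executable process)
        self.memory = memory      # shared global memory
        self.inbox = []           # message queue for coordination
        self.edges = {}           # node -> list of (condition, successor)

    def add_edge(self, node, condition, successor):
        self.edges.setdefault(node, []).append((condition, successor))

    def send(self, other, message):
        # Message passing between parallel nets.
        other.inbox.append((self.name, message))

    def step(self):
        # Run the current node's process, then take the first edge
        # whose condition evaluates true.
        self.current(self)
        for condition, successor in self.edges.get(self.current, []):
            if condition(self):
                self.current = successor
                break

def run_parallel(nets, ticks):
    # Interleave all nets one step at a time to simulate parallelism.
    for _ in range(ticks):
        for net in nets:
            net.step()

# Usage: one net opens a door (writing shared memory); another waits
# for that condition, acts, and sends a message on completion.
memory = {"door_open": False}

def open_door(net):
    net.memory["door_open"] = True          # side effect in global memory

def wait(net):
    pass                                    # idle process

def walk_through(net):
    print(f"{net.name}: walking through")
    net.send(opener, "done")                # message-passing coordination

opener = PaTNet("opener", open_door, memory)
walker = PaTNet("walker", wait, memory)
walker.add_edge(wait, lambda n: n.memory["door_open"], walk_through)
run_parallel([opener, walker], ticks=2)
print(opener.inbox)                         # [('walker', 'done')]
```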
Elsewhere we have shown how this architecture can be applied to the game of
"Hide and Seek" [4], to two-person animated
conversation [9], and to simulated emergency medical
care [10]. We are currently using it to construct appropriate
gestural responses from a synthetic agent, direct visual attention during
high-level task execution, manage locomotion tasks, and study multi-agent
activity scheduling.