Emergence Software Design: Technical Overview
Eitan Mendelowitz

The Emergence software system is designed to meet three goals. The first is the creation of an engaging, interactive 3D environment. For an environment to be truly immersive it must sustain a high frame rate (over 15 frames per second), be visually and aurally appealing, and contain objects that act and react to interaction and other events in real time. The second goal is to give designers control over the world without requiring them to learn a full-blown programming language. Finally, the Emergence system is designed to be modular and scalable.

Logically the system can be divided into five modules: the graphics module, the physics module, the artificial intelligence module, the scripting module, and the networking module. Each module is designed independently and interacts with the others through well-defined interfaces, which allows for continual, incremental improvement and development. Using familiar tools, designers are able to access and control all five modules. By engineering the system to meet these three goals, Emergence proves to be a versatile and exciting platform for designers to create compelling, alive virtual worlds.

The Graphics

The graphics module was designed from its inception to support immersive real-time 3D environments. The Emergence graphics engine makes the most of consumer-level OpenGL acceleration to provide performance that rivals high-end workstations, rendering an average of 5,500 polygons at over fifteen frames per second.

With the new version of Emergence we developed a unique system for generating the world's landscape. Using the paint program of their choice, designers create the topography of the landscape by painting a two-dimensional map in which grayscale values represent elevation and irregular terrain, producing a height map that also carries terrain coloring. The Emergence system then interprets these paintings, creating a 3D model of 131,072 vertex-colored polygons.
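The painting-to-terrain step can be sketched as follows. This is a minimal illustration, not the Emergence source: it assumes the grayscale image has already been loaded as a 2D array of 0-255 values, and the function name and scale factor are hypothetical.

```python
# Sketch: convert a grayscale height map into terrain vertices.
# Each pixel's brightness (0-255) becomes an elevation; the pixel's
# position becomes its x/z ground-plane coordinate.
def heightmap_to_vertices(pixels, height_scale=0.1):
    """pixels: 2D list of grayscale values (0-255).
    Returns one (x, y, z) vertex per pixel."""
    vertices = []
    for z, row in enumerate(pixels):
        for x, value in enumerate(row):
            vertices.append((float(x), value * height_scale, float(z)))
    return vertices

# A 3x3 map: flat ground with a single peak in the middle.
demo = [[0, 0, 0],
        [0, 200, 0],
        [0, 0, 0]]
verts = heightmap_to_vertices(demo)
```

Triangulating each grid cell into two triangles makes the polygon count above plausible: 2 × 256 × 256 = 131,072, consistent with a 256 × 256-cell grid.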

Using modeling and animation programs such as Softimage and 3D Studio Max, designers are able to create pieces of architecture, inanimate objects, and animated characters to populate the world. Once imported into the Emergence system, these pieces of architecture, objects, and creatures have 3D models averaging 300 texture-mapped polygons. If desired, the models can be animated, contain multiple light sources, and/or contain particle systems. The Emergence system can support a world populated with scores of such models.

The Physics

The physics module has two tasks: collision detection and force application. Every solid object in the world, including architecture and creatures (i.e. everything with which an object can collide), has a collision model. Often the collision model is a simplified version of the 3D object and character models created by the designers. If the collision models of two objects interpenetrate, the objects are said to be colliding. Collisions are computed using hierarchical bounding boxes, allowing quick and efficient collision detection even for complex objects.
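The interpenetration test above can be sketched with axis-aligned bounding boxes. A hierarchical scheme tests a coarse root box first and recurses into child boxes only on overlap; the sketch below shows just the box-vs-box primitive, with illustrative names.

```python
# Sketch: axis-aligned bounding box (AABB) interpenetration test.
class AABB:
    def __init__(self, min_pt, max_pt):
        self.min = min_pt  # (x, y, z) lowest corner
        self.max = max_pt  # (x, y, z) highest corner

def overlaps(a, b):
    """Two boxes interpenetrate iff their extents overlap on
    every one of the three axes."""
    return all(a.min[i] <= b.max[i] and b.min[i] <= a.max[i]
               for i in range(3))

box_a = AABB((0, 0, 0), (2, 2, 2))
box_b = AABB((1, 1, 1), (3, 3, 3))   # overlaps box_a
box_c = AABB((5, 5, 5), (6, 6, 6))   # far from box_a
```

The per-axis test is what makes the hierarchical version cheap: most candidate pairs are rejected at the root box without ever touching the detailed geometry.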

All objects in the world may be subjected to physical forces. The designers can choose which forces should be applied to a given object, selecting from a standard set (e.g. friction, gravity, and drag). In addition to choosing the forces to which an object is subject, designers can set the physical qualities of the object (e.g. mass, coefficient of friction, elasticity).

For example, designers wishing to create an immobile object (such as a building) would apply no forces to it (an object with no forces does not move, in either the real world or an Emergence world). To make an object move, a designer can apply the standard set of forces (e.g. friction, gravity, drag, collision). A designer could make an object weightless by not applying gravitational force.
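The opt-in force model above can be sketched as a per-object force accumulation followed by a forward-Euler integration step. Everything here is illustrative (function name, constants, and the vertical-only drag), not the Emergence implementation.

```python
GRAVITY = -9.8  # m/s^2, along the y axis (illustrative constant)

def step(pos, vel, mass, forces, dt, drag_coeff=0.0):
    """Advance one object by dt seconds.  `forces` is the set of
    force names the designer enabled for this object; an object
    with no forces does not move, as noted above."""
    if not forces:
        return pos, vel
    fy = 0.0
    if "gravity" in forces:
        fy += mass * GRAVITY
    if "drag" in forces:
        fy -= drag_coeff * vel[1]       # drag opposes velocity
    ay = fy / mass
    new_vel = (vel[0], vel[1] + ay * dt, vel[2])
    new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
    return new_pos, new_vel

# A building (no forces) stays put; a dropped object falls.
building = step((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0, set(), 0.1)
falling = step((0.0, 10.0, 0.0), (0.0, 0.0, 0.0), 1.0, {"gravity"}, 1.0)
```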

The physics engine runs on a separate thread from the graphics engine. Depending on the computer, threading allows the physics engine to run at a higher frame rate than the graphics engine, yielding more realism through supersampling. The physics module provides the graphics module with the location and orientation of each object in the world.
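The supersampling idea can be sketched as fixed sub-stepping: several small physics steps per rendered frame. The 60 Hz and 15 Hz rates below are illustrative, not measured from the system.

```python
# Sketch: a 60 Hz physics update runs four fixed sub-steps for
# each 15 Hz graphics frame, so fast motion is integrated with
# smaller (more accurate) time steps than the renderer sees.
PHYSICS_HZ = 60
GRAPHICS_HZ = 15

def simulate_frame(y, vy, g=-9.8):
    """Advance one falling object through one graphics frame
    using several small physics steps."""
    steps = PHYSICS_HZ // GRAPHICS_HZ   # 4 sub-steps per frame
    dt = 1.0 / PHYSICS_HZ
    for _ in range(steps):
        vy += g * dt
        y += vy * dt
    return y, vy

frame_y, frame_vy = simulate_frame(100.0, 0.0)
```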

The Artificial Intelligence Module

Creatures (objects that are able to act) are given "minds." A mind interprets external stimuli, arbitrates between different goals, and takes actions. The creatures in Emergence are situated in their world and have knowledge of it only through their senses: they see objects within their view frustum, hear sounds that are audible, and feel objects they are touching.
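The vision sense can be sketched as a field-of-view test. The engine proper uses the view frustum; the cone test below is an illustrative stand-in, and it assumes `facing` is a unit vector.

```python
import math

# Sketch: a creature "sees" an object only if the angle between
# its facing direction and the direction to the object is within
# half its field of view.
def can_see(eye, facing, target, fov_degrees=90.0):
    """eye/target: (x, y, z) points; facing: unit direction vector."""
    to_target = tuple(t - e for t, e in zip(target, eye))
    dist = math.sqrt(sum(c * c for c in to_target))
    if dist == 0.0:
        return True                      # object at the eye itself
    cos_angle = sum(f * c for f, c in zip(facing, to_target)) / dist
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))
```

A creature facing +z sees an object ahead of it but not one directly behind it, so its knowledge of the world stays local rather than global.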

In the same manner that designers are given a selection of physical forces to act on objects in the world, they are given a selection of high- and low-level goals that determine the behavior of a creature. Examples of low-level goals are don't-move, turn-left, and speed-up; examples of higher-level goals are avoid-collision and follow-creature. When a creature has multiple goals, more complex emergent behavior can result. For instance, if a group of creatures are given goals both to follow members of the group and to face the same direction as their peers, flocking behavior results.

Not only can designers choose which goals a creature has, they can also decide how important each goal is to that creature. For example, the designer can decide that creature A wants to follow creature B but wants to avoid creature C, and that A wants to avoid C more than it wants to follow B. The mind then arbitrates between these different (and sometimes conflicting) desires. The resulting behavior is that under normal circumstances A follows B, except when A is near C (since A wants to avoid C).

In addition to choosing goals, designers can choose between different types of "minds." The simplest mind always chooses the most important unfulfilled goal to pursue. Most creatures in the Emergence world use a more complicated mind that averages different desires together according to their relative importance.
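The averaging mind can be sketched as an importance-weighted blend of steering proposals, one per goal. The representation (each goal proposing a 2D direction) and the names are illustrative assumptions.

```python
# Sketch: the "averaging" mind blends each goal's proposed
# steering direction, weighted by designer-set importance.
def arbitrate(desires):
    """desires: list of (importance, (dx, dz)) proposals.
    Returns the importance-weighted average direction."""
    total = sum(w for w, _ in desires)
    if total == 0:
        return (0.0, 0.0)
    dx = sum(w * d[0] for w, d in desires) / total
    dz = sum(w * d[1] for w, d in desires) / total
    return (dx, dz)

# Creature A: follow-B (weight 1) pulls east, avoid-C (weight 3)
# pushes west.  Avoidance dominates, as in the example above.
heading = arbitrate([(1.0, (1.0, 0.0)), (3.0, (-1.0, 0.0))])
```

Note how conflicting desires are resolved continuously rather than by a hard switch, which is what lets behaviors such as flocking emerge from simple weighted goals.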

Once the mind has arbitrated among the goals, a creature must decide how to attain them. Goal attainment is handled through a motor system. Because Emergence supports different types of creatures (those that fly, those restricted to the ground, those that can climb walls, those that cannot), different types of creatures must attain their goals through different means. Each creature is given a motor system appropriate to its type, which decides how to move to best achieve its goals. Once an action is decided upon, the motor system exerts a force that is applied to the object by the physics engine. (Just because a creature decides to move forward does not mean it can; there may be a wall or another obstacle in its path.)

All decisions are made in real time (over 15 times per second) in response to users, other creatures, and the Emergence environment. Because of this interactivity, no two experiences in the Emergence system are the same. While creatures exhibit the same behaviors (as defined by the designers), they act in ways appropriate to their situation, which is always changing.

The Scripting

A central feature of the Emergence system is the behavior scripting language that enables designers to define the personalities, behaviors, desires, relationships and unique characteristics of objects and characters in a virtual world.

The scripting language is modeled after a finite state machine (FSM). A finite state machine is defined by states and transitions between those states. In the Emergence scripting language, when a script enters a state it executes a list of sequential statements that modify objects in the world; when the last statement has executed, the script is said to be in the state defined by those statements. Once in a state, the script monitors the world until one of its transition statements is satisfied. When a transition statement is satisfied, the script enters a new state (and executes its statements). Every object in the world (including the world itself) can have multiple scripts associated with it.
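The state/transition model above can be sketched as a small interpreter. This toy version just records the statements it executes; real Emergence scripts attach to world objects, and all names below are illustrative.

```python
# Sketch: a script enters a state, runs its entry statements once,
# then checks its transitions against the world each frame.
class Script:
    def __init__(self, states, start):
        self.states = states    # name -> (statements, transitions)
        self.log = []           # record of executed statements
        self.enter(start)

    def enter(self, name):
        self.state = name
        statements, self.transitions = self.states[name]
        for stmt in statements:          # entry statements run once
            self.log.append(stmt)

    def update(self, world):
        """Fire the first transition whose condition holds."""
        for condition, next_state in self.transitions:
            if condition(world):
                self.enter(next_state)
                return

states = {
    "idle": (["play-idle-animation"],
             [(lambda w: "wake-up" in w["messages"], "active")]),
    "active": (["play-sound", "enable-goals"], []),
}
script = Script(states, "idle")
script.update({"messages": ["wake-up"]})   # message triggers transition
```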

The following types of statements can be executed upon entering a new state: create and/or destroy objects, enable and/or disable controllers (such as physical forces and goals or desires), play sounds, print a statement to the console, spawn a new script, send a message to another script, play an animation, pause for a given amount of time, and change a property value. Property values determine physical qualities (such as mass) as well as behavioral qualities (e.g. the relative importance of different goals). The ability to change property values offers designers significant control and flexibility.

Once in a state, the script waits to see if any of its transition statements are fulfilled. These transitions are mainly event based: they can be triggered when an object collides with another object, when it receives a message from another script, when it finishes an animation, when it finishes playing a sound, or when it sees something (if it can see). Transition statements can also be triggered on a timer (e.g. once every ten seconds).

Once a transition is triggered, its truth statements are checked. For instance, to make a creature stop (and play dead) when it sees a predator, you can have the vision transition statement check whether the object in view is a predator. If the transition is satisfied (the creature sees a predator), the script can enter a new state that sets the "importance" property of the don't-move goal very high.
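The predator example can be sketched directly: the vision event fires for any seen object, a truth statement filters for predators, and the new state's entry statement raises the don't-move importance. Property names and the value 10.0 are illustrative assumptions.

```python
# Sketch: truth-statement check plus the property change that the
# new state performs on entry.
def vision_transition_satisfied(seen_object):
    """Truth statement: the transition fires only for predators."""
    return seen_object is not None and seen_object.get("predator", False)

def enter_play_dead_state(properties):
    """Entry statement: make the don't-move goal dominant."""
    properties["importance.dont-move"] = 10.0   # "very high"
    return properties

creature = {"importance.dont-move": 0.1}
if vision_transition_satisfied({"name": "wolf", "predator": True}):
    creature = enter_play_dead_state(creature)
```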

The scripting language allows designers to change the Emergence world while it is running, rather than just statically setting values before entering the world. For example, running into a normally friendly creature can make it "angry," at which point it may change its animation and give off a sound expressing its displeasure. This provides the powerful capability not only to define initial behaviors but to change a creature's "feelings" (e.g. desires, goals, behaviors, likes and dislikes) based on interactions or other events.

The first version of the scripting language was available to designers in June 1998, a few weeks before the interactive art installation at Siggraph '98. The designers had the opportunity to test the scripting language as we prepared the installation, and it proved powerful and effective. Based on this initial experience, the research team will continue to make modifications and enhancements over the next few months. The scripting language is the key interface to the Emergence system and requires extensive use and feedback to fully exploit its capabilities.


The Networking

Networking is the newest addition to the Emergence system. Emergence networking is designed to be peer-to-peer and uses TCP/IP (the standard Internet protocol suite). While each machine on the Emergence network has a local copy of an object (to reduce the bandwidth used by data transfer), only one computer on the network has "ownership" of that object. The machine with ownership of an object broadcasts changes in that object's state (be it position or property values) over the network to the other participating machines. The network is tolerant of lost packets: if information about an object's position is lost, the local machine interpolates the object's current position until accurate position information is received.
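The loss-tolerance scheme can be sketched as dead reckoning: project the object forward from the last authoritative state until a fresh packet arrives. The function and field names are illustrative.

```python
# Sketch: when an update packet is dropped, the local copy predicts
# the object's position from the last state broadcast by the owner.
def predicted_position(last_pos, last_vel, seconds_since_update):
    """Linearly project the object's last known position forward
    along its last known velocity."""
    return tuple(p + v * seconds_since_update
                 for p, v in zip(last_pos, last_vel))

# Last packet said: at origin, moving at (1, 0, 2) units/sec.
# Half a second later, with no new packet received:
guess = predicted_position((0.0, 0.0, 0.0), (1.0, 0.0, 2.0), 0.5)
```

When the next authoritative packet does arrive, the owner's value simply replaces the local prediction, so small errors never accumulate.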

At Siggraph '98, the Emergence system displayed a panoramic first-person view of the Emergence world using three networked computers. Each computer received information about the world and camera positions over the network, and each rendered a different view, resulting in one wide-screen image.
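One simple way to derive the three views from the shared camera state is a per-machine yaw offset; the 60-degree per-screen field of view below is an illustrative assumption, not the installation's actual value.

```python
# Sketch: three machines share one camera over the network and
# each adds a fixed yaw offset to render its slice of the panorama.
def camera_yaw(shared_yaw, machine_index, fov=60.0):
    """machine_index: 0 (left screen), 1 (center), 2 (right)."""
    return shared_yaw + (machine_index - 1) * fov

yaws = [camera_yaw(0.0, i) for i in range(3)]   # left, center, right
```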

In the upcoming year we will expand the networking capabilities and explore the design of multi-participant environments.


The Emergence system allows designers to create complex virtual 3D environments filled with artificial life that reacts and responds to a multitude of situations and interactions. In addition, Emergence provides real-time interactivity on an Intel platform at a performance level usually seen only on high-end workstations.

Technical Points of Interest

Graphics software supports:

  • Modular design
  • OpenGL
  • Graphical overlay of scripting and performance data
  • Real-time performance on PCs
  • Peer-to-peer networking
  • Collision detection
  • Particle system
  • Hierarchical 3D animation
  • Multiple camera viewpoints
  • Multiple moving light sources
  • Vertex coloring
  • Shadows
  • Texture mapping
  • Stereo sound

Artificial Intelligence/Designer Interface supports:

  • 3D physics
  • High-level modular behaviors (e.g. move-to-point, avoid-collision)
  • Separation between motor systems and agent mind (allowing for the application of behaviors to different types of agents)
  • High-level scripting through finite state machines
  • Agents perform simultaneous goal attainment (they can think about multiple behaviors at once)
  • Agents respond to senses (vision, hearing, touch...) rather than global information