Algorithms & Applications Group
Pursuit and Evasion Techniques

Project Personnel: Samuel Rodriguez, Nancy Amato

We are interested in simulating complex interactions between groups of agents in complex, realistic environments. Pursuit and evasion is a commonly studied scenario in which the pursuers seek to locate the evaders and then either attempt to capture them or drive them toward a certain location. The pursuit behavior may be applied to catching prey, apprehending a criminal, or herding other agents to a desired goal location. The evaders attempt to avoid the pursuers and, if discovered, perform evasive actions in an attempt to make the pursuers give up the chase.

In our work, we combine our roadmap-based behaviors with an agent-based implementation to simulate a range of scenarios. Our framework allows us to implement various behaviors by extending a general behavior algorithm. By changing parameters in each behavior or in the agents, we can control how the agents interact with each other and tune the manner in which a simulation progresses. We can also simulate multiple groups of pursuers and evaders, each of which may be interested only in specific groups in the environment rather than in all evaders or pursuers. Furthermore, a group of agents can pursue a second group while simultaneously avoiding a third group, thereby achieving multiple objectives. We have applied these behaviors in complex simulated scenarios and on iCreate robots in simple environments using an overhead camera.
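The idea of extending a general behavior algorithm and tuning it through parameters can be sketched as follows. This is a minimal, hypothetical illustration, not our actual implementation; the class names, parameters, and 2D point representation are assumptions made for the example.

```python
import math


class Behavior:
    """Hypothetical base class: shared, tunable parameters for all behaviors."""

    def __init__(self, view_radius=10.0, speed=1.0):
        self.view_radius = view_radius  # how far the agent can see
        self.speed = speed              # distance moved per step

    def can_see(self, agent_pos, target_pos):
        # Simplified visibility: distance only, no obstacle checks.
        return math.dist(agent_pos, target_pos) <= self.view_radius

    def step(self, agent_pos, target_pos):
        raise NotImplementedError


class Pursue(Behavior):
    def step(self, agent_pos, target_pos):
        # Move toward the target at the agent's speed.
        dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
        d = math.hypot(dx, dy) or 1.0
        return (agent_pos[0] + self.speed * dx / d,
                agent_pos[1] + self.speed * dy / d)


class Evade(Behavior):
    def step(self, agent_pos, target_pos):
        # Move directly away from the pursuer.
        dx, dy = agent_pos[0] - target_pos[0], agent_pos[1] - target_pos[1]
        d = math.hypot(dx, dy) or 1.0
        return (agent_pos[0] + self.speed * dx / d,
                agent_pos[1] + self.speed * dy / d)
```

Varying `view_radius` and `speed` per group is one way the interaction between groups could be tuned, in the spirit described above.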

Realistic Scenarios

Our versatile approach to agent-based pursuit-evasion games extends the range of applications we can handle. In particular, we support 3D scenarios that require more complex visibility checks accounting for obstacles, surfaces, and agents in the environment. This 3D extension is critical for handling the more realistic situations we are interested in studying, which include crowds, terrains, and multi-level scenarios.

Roadmap-Based Level Clearing of Buildings

We have developed a roadmap-based approach to a multi-agent search strategy for clearing a building or multi-story environment. This approach encodes the environment as a roadmap and partitions the underlying graph. The roadmap encodes feasible paths through the environment; however, we also use it in our search strategies to cover and clear the environment. Within this roadmap-based framework, we can provide certain guarantees on coverage and on the number of agents needed. We are able to solve this problem in complex, realistic environments, whereas many approaches are restricted to simple 2D environments.
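A sweep over a roadmap graph can be sketched as below. This is a toy simplification under assumed inputs (an adjacency-list graph and a start node); it does not reproduce the partitioning or the actual guarantees of our approach, but it illustrates how a roadmap can drive covering a space and how the frontier size relates to the number of agents needed to avoid recontamination.

```python
from collections import deque


def sweep_clear(graph, start):
    """Sketch: breadth-first 'sweep' over a roadmap graph.

    Returns the order in which nodes are cleared and the maximum
    frontier size seen during the sweep, which serves here as a crude
    stand-in for the number of agents needed to hold the boundary
    between cleared and uncleared regions (a hypothetical simplification).
    """
    cleared = {start}
    frontier = deque([start])
    max_frontier = 1
    order = []
    while frontier:
        node = frontier.popleft()
        order.append(node)
        for nbr in graph[node]:
            if nbr not in cleared:
                cleared.add(nbr)
                frontier.append(nbr)
        max_frontier = max(max_frontier, len(frontier))
    return order, max_frontier
```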

The first environment consists of four levels. The first level is an open area with stairwells leading to the second level. The second level and each subsequent level is divided by four obstacles, creating five hallways off the two main hallways. The upper levels are connected by two stairways between the levels. The environment has dimensions of 230 units × 140 units. We tested our search algorithm with a range of agent view radii.

The second environment we performed experiments in is much more complex than the previous one. It consists of three levels, with many rooms on each level with walls obstructing view between rooms and hallways. There are three stairwells between the first and second floor and two between the second and third floor.

Roadmap-Based Pursuit-Evasion in 3D Structures

In our more recent work, we present an approach to the pursuit-evasion problem which is applicable to complex, multi-level environments. Studying each aspect of this problem in 3D structured environments is a distinct extension over many previous approaches. We also utilize our roadmap-based approach to multi-agent behavior when tracking agents of interest. Results are presented in three multi-level environments to highlight the search, pursuit and evasion components of the problem.

The first scenario shows one pursuing agent searching for and pursuing evading agents in a structured 3D environment. Our roadmap-based approach is general enough to be applicable to these complex environments.

The second scenario compares two types of searching behaviors (basic covering and frontier search). Also shown is pursuit with and without an agent tracking model.

Terrains

A terrain represents a realistic outdoor environment, often consisting of multiple hills and valleys. These environments provide numerous hiding locations for evading agents.

Crowds

A crowd scenario represents another realistic scenario that is not often looked at in the pursuit-evasion literature. The crowd is an example of multiple moving objects that block visibility between agents. Our general evasion strategies make use of crowd members when scoring potential hiding locations.

Multi-Level Environment

Supporting multi-level environments allows us to study a pursuit-evasion game in a building-like environment. These are potentially complex environments consisting of multiple floors and stairwells. Our roadmap-based approach extends easily to this type of environment, as paths in the roadmap guide an agent through the environment.

Tag Game

The game of tag played in this scenario is a very simple extension of the standard pursuit/evasion game. In this scenario, two sets of pursuing agents with the same capabilities start out with predefined pursuit behaviors. The evading agents in the environment have the goal of avoiding both sets of pursuing agents. Once an evading agent is captured, it becomes part of the capturing agent's team and begins executing the same pursuit behavior.
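The capture rule of this tag game can be sketched as follows. The function name, the distance-based capture test, and `capture_radius` are assumptions for the example; the point is the team-switching rule described above.

```python
import math


def play_tag_round(team_a, team_b, evaders, capture_radius=1.0):
    """Sketch of the tag rule: an evader that comes within capture
    range of a pursuer joins the capturing agent's team (and would
    thereafter execute that team's pursuit behavior).

    Returns the evaders still uncaptured; team lists are mutated.
    """
    remaining = []
    for e in evaders:
        captured = False
        for team in (team_a, team_b):
            if any(math.dist(p, e) <= capture_radius for p in team):
                team.append(e)  # evader switches sides
                captured = True
                break
        if not captured:
            remaining.append(e)
    return remaining
```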

Robots

While our roadmap-based approach can handle very complex environments, it can also handle more basic environments on actual robots. We have applied our simulation framework on a set of iCreate robots. Overhead camera data is used to obtain information about the environment (e.g., robot positions and orientations), which is sent to simulated sensors. The simulated sensors process this information to give each agent knowledge of what is known to it in the environment. An agent uses this information in the behavioral process when determining its next actions.


Other Aspects of Pursuit-Evasion Behaviors

We have looked at a number of aspects of pursuit-evasion, including simulating naturally inspired behaviors and designing behaviors that achieve more complex objectives. Here we give an overview of this work.
Surround and Attack

The motivation for surrounding the target is to block paths by which the target could potentially escape. When performing this behavior, the agents must focus on their current pursuit. Once a target is located, the locating agent forms a subgroup of the surround-and-attack agents that will participate in the new pursuit. The size of the group is determined by two factors: the desired size of the pursuit subgroup and the number of agents available to participate in a new pursuit.

The agents must then determine how they will surround their target. They decide upon locations encircling the target, placed using an estimated reaction distance: the distance at which the pursuers believe the target will begin taking action based upon the presence of the pursuers. The surrounding locations therefore need to be farther from the target than this reaction distance. The agents also determine which agent will proceed to each location. Once these factors are decided, the agents attempt to find paths from their current locations to their specified surrounding positions. These paths also make use of the estimated reaction distance, remaining far enough from the target to prevent a reaction.
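Placing surrounding locations just outside the reaction distance can be sketched as below. The even angular spacing and the `margin` parameter are assumptions for illustration.

```python
import math


def surround_positions(target, n_agents, reaction_dist, margin=1.0):
    """Sketch: place n_agents evenly on a circle around the target,
    just beyond its estimated reaction distance (margin is an
    illustrative parameter, not from the actual system)."""
    r = reaction_dist + margin
    return [(target[0] + r * math.cos(2 * math.pi * k / n_agents),
             target[1] + r * math.sin(2 * math.pi * k / n_agents))
            for k in range(n_agents)]
```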

There are two conditions that can cause the agents to begin chasing the target. The first is if the pursuing agents have all reached their surrounding locations. In this case, the preparations for the chase have been completed, and the pursuing agents should have the advantage of their positioning. The second condition is that the target moves to a location that is close to one of the surrounding agents. In this case, the agents will determine that there is a chance for an immediate capture, and will attempt to take advantage of the opportunity.
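The two chase triggers described above reduce to a simple predicate, sketched here with assumed inputs (agent positions paired with their assigned surrounding goals, plus a closeness threshold):

```python
import math


def should_chase(agent_positions, surround_goals, target, close_dist, tol=1e-6):
    """Sketch of the two chase triggers: (1) every pursuer has reached
    its surrounding location, or (2) the target has strayed close to
    one of the surrounding agents (close_dist is illustrative)."""
    all_in_place = all(math.dist(a, g) <= tol
                       for a, g in zip(agent_positions, surround_goals))
    target_close = any(math.dist(a, target) <= close_dist
                       for a in agent_positions)
    return all_in_place or target_close
```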

Behavior Combination for Pursuit and Evasion

The basic behaviors focus mainly on either pursuit or evasion. However, an agent may wish to accomplish more than simply pursuing or evading another group of agents. In this case, there must be some form of arbitration between the different objectives and behaviors that the agent possesses.

Here, we have implemented two such methods of arbitrating between different behaviors. The first is a basic selection between the different behaviors. The agent will consider the observed state of the environment and its own internal state, then make a decision as to which of the available behaviors best suits the situation. In the second, the agent will have a main behavior that it will employ, and it will modify its planned path based on the objectives of the secondary behaviors.


Behavior Selection

The simplest method of handling multiple behaviors is to select one behavior to employ at a given time. The selection-based approach that we employ here decides which behavior should be active based upon the current state of the agent or group of agents.
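Selection-based arbitration can be sketched as choosing the highest-priority behavior whose activation condition holds. The dictionary representation and the `condition`/`priority` fields are assumptions made for this example.

```python
def select_behavior(agent_state, behaviors):
    """Sketch of selection-based arbitration: among the behaviors whose
    activation condition holds for the current state, pick the one with
    the highest priority (representation is hypothetical)."""
    applicable = [b for b in behaviors if b["condition"](agent_state)]
    return max(applicable, key=lambda b: b["priority"])["name"]
```

For example, a group could default to patrolling and switch to surround-and-attack only while an evader is in view, matching the priorities described below.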

The following animations provide a demonstration of the selection-based behavior combination. For this setup, we use two groups of agents. For the first, we blend our patrolling behavior with our surround-and-attack behavior. The second group blends our foraging behavior with the flee-and-hide behavior. In the various scenarios demonstrated, the groups of agents are given different view ranges. The priorities are set so that when an agent in the first group detects an agent in the second group, it switches to the surround-and-attack behavior. Agents in the second group switch to a hiding behavior when they notice an agent in the first group, but quickly resume foraging once they lose sight of it.
Same capabilities between pursuing and evading agents: capture
Same capabilities between pursuing and evading agents: escape



Achieving Objectives from Multiple Behaviors

Many of the behaviors we create find a path in the roadmap for an agent to follow. The path modification algorithm modifies the output paths from one behavior with respect to constraints from other behaviors. In particular, this behavior blender combines the path obtained from one behavior (the main behavior) with constraints from other sub-behaviors to create an action that attempts to satisfy several of the agent's goals. This combination alters or deforms the paths the agents follow.
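One way such a path deformation could work is sketched below: each waypoint of the main behavior's path is pushed away from points that a sub-behavior wants avoided. The force model and the `weight`/`influence` parameters are illustrative assumptions, not our actual blender.

```python
import math


def modify_path(path, repel_points, weight=0.5, influence=5.0):
    """Sketch: deform the main behavior's path away from points that a
    sub-behavior wants avoided. Each repel point within `influence`
    pushes a waypoint away with a force that falls off with distance
    (weight/influence are illustrative parameters)."""
    out = []
    for (x, y) in path:
        fx = fy = 0.0
        for (px, py) in repel_points:
            d = math.hypot(x - px, y - py)
            if 0 < d < influence:
                # Unit direction away from the repel point, scaled by
                # how deep the waypoint is inside the influence region.
                fx += (x - px) / d * (influence - d) / influence
                fy += (y - py) / d * (influence - d) / influence
        out.append((x + weight * fx, y + weight * fy))
    return out
```

An attraction toward a point (as in the example scenario below) would work the same way with the sign of the force flipped.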


Example Scenario

Agents in group A search the environment for the hiding agents in group B. Once an agent in group A discovers an agent in group B, it pursues it in order to capture it. At the same time, agents in group A augment their behavior by hiding from or being repelled by patrolling agents in group C. Agents in group B that are hiding augment their behavior to pursue or be attracted to agents in group C. The patrolling group C simulates a patrol behavior found in nature, where a group of animals sends members on patrols to maintain their territory.
Movie 1: Example of three groups using path modification.
Movie 2: Varying the forces for different behaviors.

Publications

Reciprocally-Rotating Velocity Obstacles, Andrew Giese, Daniel Latypov, Nancy M. Amato, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. to appear, Hong Kong, China, Jun 2014. Also, Technical Report, TR13-009, Parasol Laboratory, Department of Computer Science, Texas A&M University, College Station, TX U.S.A., Oct 2013.

Reciprocally-Rotating Velocity Obstacles, Andrew Giese, Masters Thesis, Department of Computer Science and Engineering, Texas A&M University, College Station, USA, May 2014.

Multi-Robot Caravanning, Jory Denny, Andrew Giese, Aditya Mahadevan, Arnaud Marfaing, Rachel Glockenmeier, Colton Revia, Samuel Rodriguez, Nancy M. Amato, In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), pp. 5722 - 5729, Tokyo, Japan, Nov 2013.

Optimizing Aspects of Pedestrian Traffic in Building Designs, Samuel Rodriguez, Yinghua Zhang, Nicholas Gans, Nancy M. Amato, In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), Nov 2013.

Improving Aggregate Behavior in Parking Lots with Appropriate Local Maneuvers, Samuel Rodriguez, Andrew Giese, Nancy M. Amato, In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), Nov 2013.

Environmental Effect on Egress Simulation, Samuel Rodriguez, Andrew Giese, Nancy M. Amato, Saeid Zarrinmehr, Firas Al-Douri, Mark Clayton, In Proc. of the 5th Intern. Conf. on Motion in Games (MIG), 2012, in Lecture Notes in Computer Science (LNCS), pp. to appear, Rennes, Brittany, France, Nov 2012.

A Sampling-Based Approach to Probabilistic Pursuit Evasion, Aditya Mahadevan, Nancy M. Amato, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 3192 - 3199, St. Paul, Minnesota, USA, May 2012.

Roadmap-Based Techniques for Modeling Group Behaviors in Multi-Agent Systems, Samuel Rodriguez, Ph.D. Thesis, Department of Computer Science and Engineering, Texas A&M University, Jan 2012.

Roadmap-Based Level Clearing of Buildings, Samuel Rodriguez, Nancy M. Amato, In Proc. of the 4th Intern. Conf. on Motion in Games (MIG), 2011, in Lecture Notes in Computer Science (LNCS), pp. 340-352, Edinburgh, UK, Oct 2011.

Toward Realistic Pursuit-Evasion Using a Roadmap-Based Approach, Samuel Rodriguez, Jory Denny, Juan Burgos, Aditya Mahadevan, Kasra Manavi, Luke Murray, Anton Kodochygov, Takis Zourntos, Nancy M. Amato, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 1738-1745, May 2011.

Roadmap-Based Pursuit-Evasion in 3D Structures, Samuel Rodriguez, Jory Denny, Aditya Mahadevan, Jeremy (Cong-Trung) Vu, Juan Burgos, Takis Zourntos, Nancy M. Amato, In Proc. of 24th Intern. Conf. on Computer Animation and Social Agents (CASA), 2011, in Transactions on Edutainment, pp. to appear, May 2011.

Utilizing Roadmaps in Evacuation Planning, Samuel Rodriguez, Nancy M. Amato, In Proc. of 24th Intern. Conf. on Computer Animation and Social Agents (CASA), 2011, in Intern. J. of Virtual Reality (IJVR), pp. 67-73, May 2011.

Toward Simulating Realistic Pursuit-Evasion Using a Roadmap-Based Approach, Samuel Rodriguez, Jory Denny, Takis Zourntos, Nancy M. Amato, In Proc. of the 3rd Intern. Conf. on Motion in Games (MIG), 2010, in Lecture Notes in Computer Science (LNCS), pp. 82-93, Nov 2010.

Behavior-Based Evacuation Planning, Sam Rodriguez, Nancy M. Amato, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 350-355, Anchorage, AK, May 2010.

A Framework for Planning Motion in Environments with Moving Obstacles, Sam Rodriguez, Jyh-Ming Lien, Nancy M. Amato, In Proc. IEEE Int. Conf. Intel. Rob. Syst. (IROS), pp. 3309-3314, Oct 2007.

Swarming Behavior Using Probabilistic Roadmap Techniques, O. Burchan Bayazit, Jyh-Ming Lien, Nancy M. Amato, Lecture Notes in Computer Science, 3342/2005:112-125, Jan 2005.

Shepherding Behaviors, Jyh-Ming Lien, O. Burchan Bayazit, Ross T. Sowell, Samuel Rodriguez, Nancy M. Amato, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 4159-4164, New Orleans, Apr 2004. Also, Technical Report, TR03-006, Parasol Laboratory, Department of Computer Science, Texas A&M University, Nov 2003.

Better Group Behaviors using Rule-Based Roadmaps, O. Burchan Bayazit, Jyh-Ming Lien, Nancy M. Amato, In Proc. Int. Wkshp. on Alg. Found. of Rob. (WAFR), pp. 95-111, Nice, France, Dec 2002.

Roadmap-Based Flocking for Complex Environments, O. Burchan Bayazit, Jyh-Ming Lien, Nancy M. Amato, In Proc. Pacific Conf. on Computer Graphics and App. (PG), pp. 104-113, Beijing, China, Oct 2002. Also, Technical Report, TR02-003, Parasol Laboratory, Department of Computer Science, Texas A&M University, Apr 2002.