Milind Tambe, Ph.D.
University of Southern California
Multiagent Systems: Putting Theory into Practice
Wednesday — March 24, 2010
Pacific Forum — 3:00 p.m.
How do we build multiagent systems? Today, within the agents and multiagent systems community, we see four main approaches: logic-based belief-desire-intention (BDI), decision theory and its incarnation in distributed Markov decision problems (distributed MDPs or POMDPs), distributed constraint optimization problems (DCOPs), and finally, auctions or game-theoretic approaches. In general, while there is exciting progress in this research, we still lack sufficient testing of our theories in complex multiagent domains to evaluate their promised strengths and uncover unanticipated limitations.
In this context, I will outline lessons learned in the research efforts of the Teamcore group to transition theory into practice. I will focus on game theory research for randomizing plans for security applications, to avoid predictability that may be exploited by an opponent. Our algorithms are at the heart of ARMOR, a software scheduler that randomizes police checkpoints and canine patrols, deployed at Los Angeles International Airport since August 2007. Our algorithms are also in use by the Federal Air Marshals Service and the Transportation Security Administration. Turning to DCOPs, I will focus on our recent effort to deploy them on a mobile sensor network. Surprisingly, these results show that increased teamwork can hurt agent performance, even when communication and computation costs are ignored. Finally, I will outline some recent research thrusts, including multiagent-based evacuation simulation.
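The randomization idea behind the security work above can be sketched with a toy Stackelberg security game. This is an illustrative example, not the ARMOR algorithm: the two targets, the payoff numbers, and the grid-search solver are all invented here for exposition. A defender splits one resource between two targets; the attacker observes the coverage probabilities and best-responds; the defender picks the randomization that maximizes her expected utility, with ties assumed to break in her favor (a strong Stackelberg equilibrium).

```python
# Toy 2-target Stackelberg security game, solved by grid search.
# All payoff numbers are made up for illustration.

def solve_security_game(ua_cov, ua_unc, ud_cov, ud_unc, step=0.01):
    """Return (coverage, defender_utility) for one defender resource
    split across two targets. The attacker attacks the target with the
    highest expected payoff, breaking ties in the defender's favor."""
    best = None
    n = int(round(1 / step))
    for i in range(n + 1):
        c = [i * step, 1 - i * step]  # coverage probabilities, summing to 1
        # Attacker's expected utility for attacking each target.
        atk = [c[t] * ua_cov[t] + (1 - c[t]) * ua_unc[t] for t in range(2)]
        # Defender's expected utility if target t is attacked.
        dfn = [c[t] * ud_cov[t] + (1 - c[t]) * ud_unc[t] for t in range(2)]
        m = max(atk)
        # Among the attacker's best responses, assume ties favor the defender.
        t = max((t for t in range(2) if atk[t] >= m - 1e-9),
                key=lambda t: dfn[t])
        if best is None or dfn[t] > best[1]:
            best = (c, dfn[t])
    return best

coverage, utility = solve_security_game(
    ua_cov=[-1, -2], ua_unc=[5, 4],   # attacker payoffs (covered / uncovered)
    ud_cov=[3, 2],   ud_unc=[-5, -4]  # defender payoffs (covered / uncovered)
)
```

With these numbers the optimal defender strategy covers the higher-value target roughly 7/12 of the time rather than always: the point of the randomization is that any deterministic schedule would let the attacker simply strike the uncovered target.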
Next: March 31 - Zanna Chase