Learning to Influence Multi-Agent Interaction

It's hard for robots to interact seamlessly with humans whose behavior changes over time. We explore how robots can co-adapt alongside other agents by capturing their behavior as a high-level strategy, and then influencing that strategy over repeated interactions.

When Humans Aren’t Optimal: Robots that Collaborate with Risk-Aware Humans

To create human-like robots, we need to understand how humans behave. We present a modeling approach that enables robots to anticipate the suboptimal choices humans make when risk and uncertainty are involved.
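Risk-sensitive deviations from expected-value reasoning are commonly modeled with prospect-theory-style utilities. The sketch below is a minimal, illustrative example of that general idea, not this project's actual model; the parameter values and the sure-bet-versus-gamble scenario are assumptions chosen for illustration.

```python
# Prospect-theory-style utility: humans weigh losses more heavily than
# gains and distort probabilities. Parameters here are illustrative.
def value(x, alpha=0.88, lam=2.25, beta=0.88):
    """Concave for gains, steeper (loss-averse) for losses."""
    return x**alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

# Choice between a sure $5 and a 50% chance of $10 (equal expected value).
sure = value(5.0)
gamble = weight(0.5) * value(10.0) + weight(0.5) * value(0.0)

# A risk-averse human prefers the sure option, even though a
# rational expected-value maximizer would be indifferent.
print(sure > gamble)  # True
```

A robot that anticipates this preference can, for example, avoid handing its human partner a high-variance option the human will predictably reject.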

Controlling Assistive Robots with Learned Latent Actions

We want to make it easier for humans to teleoperate dexterous robots. We present a learning approach that embeds high-dimensional robot actions into an intuitive, human-controllable, and low-dimensional latent space.
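One way to build intuition for a low-dimensional latent action space is with a linear embedding (PCA) as a stand-in for the learned autoencoder. The sketch below is hypothetical: the 7-DoF "recorded actions" are synthetic, and the linear encoder/decoder is a simplification of what a learned model would provide.

```python
import numpy as np

# Synthetic stand-in for recorded 7-DoF joint-velocity actions that
# happen to lie near a 2-D manifold (e.g., reaching motions).
rng = np.random.default_rng(0)
latent_true = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 7))
actions = latent_true @ mixing + 0.01 * rng.normal(size=(500, 7))

# Fit a linear embedding (PCA) from the 7-D actions to a 2-D latent space.
mean = actions.mean(axis=0)
U, S, Vt = np.linalg.svd(actions - mean, full_matrices=False)
decoder = Vt[:2]  # top-2 principal directions span the latent space

def encode(a):
    """7-D robot action -> 2-D latent coordinate."""
    return (a - mean) @ decoder.T

def decode(z):
    """2-D latent command (e.g., joystick input) -> full 7-D action."""
    return z @ decoder + mean

# A 2-D joystick input decodes into a coordinated 7-DoF action.
z = np.array([0.5, -0.2])
a = decode(z)
print(a.shape)  # (7,)
```

The human then teleoperates the robot through the 2-D latent input rather than commanding all seven joints directly; a learned (nonlinear, conditional) embedding plays the role of `encode`/`decode` here.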

Learning from My Partner’s Actions: Roles in Decentralized Robot Teams

When groups of robots work together, their actions communicate valuable information. We introduce a collaborative learning and control strategy that enables robots to harness the information contained within their partner's actions.

Learning Robot Objectives from Physical Human Interaction

Physical interactions are often intentional, purposeful corrections. We present an optimal approach for learning from these corrections, so that robots understand what we really want them to do.
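Learning from corrections of this kind is often framed as an online update to the robot's reward weights: shift the weights toward the features of the trajectory the human pushed the robot onto. The sketch below is a toy illustration of that update rule; the feature function, trajectories, and learning rate are all hypothetical.

```python
import numpy as np

def features(traj):
    """Toy feature function: mean position of a (T, 2) trajectory.
    A real system would use task features like distance to obstacles."""
    return traj.mean(axis=0)

theta = np.zeros(2)  # robot's current estimate of reward weights
alpha = 0.5          # learning rate (illustrative)

original = np.array([[0.0, 0.0], [1.0, 0.0]])   # robot's planned trajectory
corrected = np.array([[0.0, 0.5], [1.0, 0.5]])  # human pushes the arm upward

# Online update: move the weights toward the features the human prefers.
theta = theta + alpha * (features(corrected) - features(original))
print(theta)  # weights now reward the corrected, higher trajectory
```

After the update the robot replans with the new weights, so a single physical push changes not just the current motion but the objective governing all future motion.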