Research


Our group explores the foundations of human-robot interaction. Many high-performance robots already exist; our goal is to enable these robots to seamlessly and intelligently interact with human partners. We seek out problems inspired by the challenges of everyday users, particularly for assistive robot arms and self-driving cars, as well as problems motivated by future applications such as personal robots or robots in the home. Research projects within Collab typically involve three steps:

  1. Formalizing the problem within a larger mathematical context
  2. Creating a learning- or control-based solution
  3. Testing our approach with real people and robots

We believe that effective research balances theory and practice. We strive to fundamentally understand how humans and robots should interact, while also conducting user studies to determine what people really need.

Current Areas of Interest

As of Summer 2021, Collab is focused on two broad problems in human-robot interaction: inclusivity and explainability.

Inclusivity


When people interact with robots, they often want to teach their robot a new skill or control their robot to perform a desired task. The field of robotics has made significant strides towards both of these goals. However, today's methods require a level of proficiency, dexterity, or expertise that many people lack. For example, in the video above a person with physical disabilities is remotely controlling a robot in our lab: what are some of the challenges he faces as he tries to assemble a dessert? We are excited about creating learning and control strategies that are intentionally and fundamentally inclusive. We envision robots that can quickly learn what their human partner really meant, even if that person's limitations prevent them from demonstrating their desired behavior.

Explainability


Learning is a key component of robotics. To solve everyday tasks, robots must continually learn from their environment. But this learning is a black box to everyday users: when a person shows a robot how to make a cup of coffee, how do they know which parts of the task the robot has learned and which parts it is still confused about? Ensuring that users understand what the robot is learning is necessary for trust and collaboration. Accordingly, we are working towards explainable robot learners that actively bring nearby users into their learning loop. We envision solutions that bridge physical and algorithmic intelligence: for instance, in the video above the user is alerted by a haptic wristband when the robot learner needs additional guidance.