VIEW: Visual Imitation Learning with Waypoints

Ananth Jonnavittula      Sagar Parekh      Dylan Losey
Collab, Virginia Tech
Under Review

Robots learning everyday tasks from a single human video demonstration.

Abstract

Robots can use Visual Imitation Learning (VIL) to learn everyday tasks from video demonstrations. However, translating visual observations into actionable robot policies is challenging due to the high-dimensional nature of video data. This challenge is further exacerbated by the morphological differences between humans and robots, especially when the video demonstrations feature humans performing tasks. To address these problems, we introduce Visual Imitation lEarning with Waypoints (VIEW), an algorithm that significantly improves the sample efficiency of human-to-robot VIL. VIEW achieves this efficiency through a multi-pronged approach: extracting a condensed prior trajectory that captures the demonstrator's intent, employing an agent-agnostic reward function to provide feedback on the robot's actions, and using an exploration algorithm that efficiently samples around the waypoints of the extracted trajectory. VIEW also segments the human trajectory into a grasp phase and a task phase to further accelerate learning. Through comprehensive simulations and real-world experiments, we show that VIEW outperforms current state-of-the-art VIL methods. VIEW enables robots to learn a diverse range of manipulation tasks involving multiple objects from arbitrarily long video demonstrations. Additionally, it can learn standard manipulation tasks such as pushing or moving objects from a single video demonstration in under 30 minutes, with fewer than 20 real-world rollouts.

How Does It Work?


Overview of our method VIEW.
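
The figure above summarizes the pipeline described in the abstract: VIEW first condenses the human's demonstration into a short prior trajectory of waypoints, then refines each waypoint by sampling candidate robot motions nearby and scoring them with an agent-agnostic reward. The snippet below is only a minimal conceptual sketch of that waypoint-centred exploration loop, not the authors' implementation; the waypoint format, the `rollout_to_waypoint` helper, and the `agent_agnostic_reward` function are hypothetical placeholders.

```python
import numpy as np

def refine_waypoints(prior_waypoints, rollout_to_waypoint, agent_agnostic_reward,
                     samples_per_waypoint=10, noise_std=0.03, rng=None):
    """Conceptual sketch: locally explore around each prior waypoint.

    prior_waypoints: list of 3D positions extracted from the human video (assumed format).
    rollout_to_waypoint(waypoint) -> observation after the robot moves to `waypoint` (hypothetical).
    agent_agnostic_reward(observation) -> scalar score comparing the scene to the demo (hypothetical).
    """
    rng = rng or np.random.default_rng(0)
    refined = []
    for wp in prior_waypoints:
        # Sample candidate waypoints in a small Gaussian neighborhood of the prior waypoint.
        candidates = wp + rng.normal(scale=noise_std, size=(samples_per_waypoint, 3))
        candidates = np.vstack([wp, candidates])  # always keep the prior itself as a candidate
        # Execute each candidate and keep the one the reward prefers.
        scores = [agent_agnostic_reward(rollout_to_waypoint(c)) for c in candidates]
        refined.append(candidates[int(np.argmax(scores))])
    return refined
```

In the real system the search is far more sample-efficient than this naive loop (the abstract reports fewer than 20 real-world rollouts), but the basic structure, sample near the prior, score with an agent-agnostic reward, keep the best candidate, is the idea the overview figure conveys.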

How Do We Extract Human Priors?


Explanation coming soon.
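
Until that explanation is posted, the sketch below should be read only as a plausible illustration of the idea stated in the abstract: condensing a dense tracked hand trajectory into a handful of waypoints that capture the demonstrator's intent. It assumes the 3D hand positions have already been extracted from the video, and it uses a generic Ramer-Douglas-Peucker style simplification, which is not necessarily the rule VIEW itself uses.

```python
import numpy as np

def condense_trajectory(points, tol=0.02):
    """Keep only the points that deviate from a straight-line fit by more than `tol`.

    points: (N, 3) array of tracked hand positions in meters (assumed, not from the paper).
    Returns a short (M, 3) array of waypoints with M << N.
    """
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    direction = end - start
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        # Degenerate segment: measure distance to the start point instead.
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of every point to the start-end chord.
        diffs = points - start
        proj = np.outer(diffs @ direction / norm**2, direction)
        dists = np.linalg.norm(diffs - proj, axis=1)
    idx = int(np.argmax(dists))
    if dists[idx] <= tol:
        return np.vstack([start, end])                 # segment is nearly straight
    left = condense_trajectory(points[:idx + 1], tol)  # split at the farthest point
    right = condense_trajectory(points[idx:], tol)
    return np.vstack([left[:-1], right])               # avoid duplicating the split point
```

Splitting the result into the grasp and task phases mentioned in the abstract would then amount to cutting this waypoint list at the moment the hand closes around the object; that detail is likewise left to the forthcoming explanation.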

BibTeX

@article{jonnavittula2024view,
      title={VIEW: Visual Imitation Learning with Waypoints}, 
      author={Ananth Jonnavittula and Sagar Parekh and Dylan P. Losey},
      journal={arXiv preprint arXiv:2404.17906},
      year={2024}
}

Acknowledgements

We thank Heramb Nemlekar for his feedback on our manuscript. This work was supported by the USDA National Institute of Food and Agriculture, Grant 2022-67021-37868.