Robot imitation learning typically requires demonstrations collected on the same robot, in the same room, with the same objects, which scales poorly.
Can robots instead learn general skill policies, tied to no specific robot, scene, or object, from larger and more diverse sources of data? We introduce ZeroMimic, a system that distills robotic manipulation skills from egocentric human web videos for zero-shot deployment in diverse environments with a variety of objects. Critically, ZeroMimic learns purely from passive human videos and does not require any robot data.
Many recent advances in robotic manipulation have come through imitation learning, yet these rely largely on mimicking a particularly hard-to-acquire form of demonstrations: those collected on the same robot, in the same room, with the same objects that the trained policy must handle at test time. In contrast, large pre-recorded datasets of human videos demonstrating manipulation skills in the wild already exist and contain valuable information for robots. Is it possible to distill a repository of useful robotic skill policies from such data, without any additional requirement for robot-specific demonstrations or exploration? We present ZeroMimic, the first such system: it generates immediately deployable image goal-conditioned skill policies for several common categories of manipulation tasks (opening, closing, pouring, pick&place, cutting, and stirring), each capable of acting upon diverse objects across diverse unseen task setups. ZeroMimic is carefully designed to exploit recent advances in semantic and geometric visual understanding of human videos, together with modern grasp affordance detectors and imitation policy classes. After training ZeroMimic on the popular EpicKitchens dataset of egocentric human videos, we evaluate its out-of-the-box performance in varied kitchen settings, demonstrating its impressive ability to handle these varied tasks. To enable plug-and-play reuse of ZeroMimic policies on other task setups and robots, we will release software and policy checkpoints for all skills.
We train 9 different skill policies: vertical hinge opening/closing, slide opening/closing, pouring, picking, placing, cutting, and stirring. We evaluate ZeroMimic skill policies out of the box in the real world, on a diverse set of objects in unseen environments. See below for the evaluation videos.
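For concreteness, the sketch below shows what invoking one such image goal-conditioned skill policy at deployment time might look like. All names here (ImageGoalSkill, get_image, execute_action) and the 7-D action format are illustrative assumptions, not the released ZeroMimic interface.

```python
import numpy as np

# Hypothetical interface for illustration only; the released ZeroMimic code may differ.
class ImageGoalSkill:
    """One skill policy (e.g. hinge opening), conditioned on the current image and a goal image."""

    def __init__(self, checkpoint_path: str):
        self.checkpoint_path = checkpoint_path  # a real implementation would load policy weights here

    def predict_action(self, rgb: np.ndarray, goal_rgb: np.ndarray) -> np.ndarray:
        # Placeholder: a learned policy would map (observation, goal) to an action,
        # e.g. a 6-DoF end-effector delta plus a gripper command.
        return np.zeros(7)


def rollout(skill: ImageGoalSkill, get_image, execute_action, goal_rgb: np.ndarray, horizon: int = 50) -> None:
    """Closed-loop execution: observe, query the skill policy, execute, repeat."""
    for _ in range(horizon):
        rgb = get_image()                             # current RGB observation
        action = skill.predict_action(rgb, goal_rgb)  # query the skill policy
        execute_action(action)                        # send the commanded motion to the robot
```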
To curate diverse and large-scale human behavior data, we use EpicKitchens, an in-the-wild egocentric vision dataset containing 20M frames across 100 hours of daily kitchen activities. To extract wrist trajectories from EpicKitchens, we run HaMeR, a state-of-the-art pre-trained hand-tracking model, to obtain 3D hand pose reconstructions. We leverage the camera extrinsics and intrinsics reconstructed by EPIC-Fields to account for the motion of the egocentric camera and accurately convert human actions into the camera coordinate frame. See the paper for more details.
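As a minimal sketch of this step, the code below re-expresses a per-frame wrist trajectory in a single reference camera frame, assuming HaMeR yields a 3D wrist position in each frame's camera coordinates and EPIC-Fields yields per-frame camera-to-world extrinsics; the actual ZeroMimic pipeline may organize this differently.

```python
import numpy as np

def to_homogeneous(p: np.ndarray) -> np.ndarray:
    """Append a 1 to a 3D point so it can be multiplied by a 4x4 transform."""
    return np.append(p, 1.0)

def wrist_trajectory_in_reference_frame(
    wrist_cam: np.ndarray,    # (T, 3): wrist position per frame, in that frame's camera coordinates (e.g. from HaMeR)
    T_world_cam: np.ndarray,  # (T, 4, 4): per-frame camera-to-world extrinsics (e.g. from EPIC-Fields)
    ref: int = 0,             # index of the frame whose camera coordinates serve as the reference
) -> np.ndarray:
    """Re-express a per-frame wrist trajectory in one reference camera frame,
    compensating for the motion of the egocentric camera."""
    T_ref_world = np.linalg.inv(T_world_cam[ref])  # world -> reference camera
    points = []
    for t in range(wrist_cam.shape[0]):
        p_world = T_world_cam[t] @ to_homogeneous(wrist_cam[t])  # frame-t camera -> world
        p_ref = T_ref_world @ p_world                            # world -> reference camera
        points.append(p_ref[:3])
    return np.stack(points)  # (T, 3) trajectory in the reference camera frame
```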
Grounded Wrist Pose
Visualized with a sparse point cloud
Projected on images
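The image overlays above come from standard pinhole projection; a minimal sketch is given below, assuming the wrist points are already expressed in the camera frame and K is the 3x3 intrinsics matrix (e.g. from EPIC-Fields).

```python
import numpy as np

def project_to_image(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection of 3D camera-frame points onto the image plane.

    points_cam: (N, 3) points in camera coordinates, with positive depth (z > 0).
    K:          (3, 3) camera intrinsics matrix.
    Returns (N, 2) pixel coordinates (u, v) for drawing the wrist trajectory on the frame.
    """
    uvw = (K @ points_cam.T).T        # (N, 3) homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth to recover (u, v)
```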
We evaluate ZeroMimic policies on unseen web videos from the EpicKitchens dataset. As shown below, while the predicted trajectories do not exactly replicate the ground truth, they still offer plausible paths that accomplish the intended tasks effectively, thanks to training on a diverse set of multi-modal human demonstrations.
Our complete experimental procedure is shown in the following time-lapse video. Before each evaluation, we position the camera and the robot so that the viewing angle roughly matches the relative camera angle of the human hand in the egocentric videos. We perform 10 trials with varying camera and robot positions that generally resemble a human's egocentric viewpoint.