Language-specified mobile manipulation tasks in novel environments simultaneously face the challenges of interacting with a scene that is only partially observed, grounding semantic information from language instructions in the partially observed scene, and actively updating knowledge of the scene with new observations. To address these challenges, we propose HELIOS, a hierarchical scene representation and associated search objective for performing language-specified pick-and-place mobile manipulation tasks. We construct 2D maps containing the semantic and occupancy information relevant for navigation while simultaneously and actively constructing 3D Gaussian representations of task-relevant objects. We fuse observations across this multi-layered representation while explicitly modeling the multi-view consistency of the detections of each object. To search for the target object efficiently, we formulate an objective function that balances exploration of unobserved or uncertain regions with exploitation of scene semantic information. We evaluate HELIOS on the OVMM benchmark in the Habitat simulator, a pick-and-place benchmark in which perception is challenging due to large and complex scenes with comparatively small target objects. HELIOS achieves state-of-the-art results on OVMM. Because our approach is zero-shot, HELIOS also transfers to the real world without requiring additional data, as we demonstrate in a real-world office environment on a Spot robot.
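The exploration-exploitation balance described above can be illustrated with a minimal sketch. This is not the paper's actual objective; the function names, the additive weighting via `alpha`, and the per-viewpoint features (unobserved fraction, uncertainty, semantic similarity to the language-specified target) are all illustrative assumptions.

```python
# Hypothetical sketch of a search objective that balances exploration of
# unobserved/uncertain regions with exploitation of semantic cues.
# All names and the weighting scheme are assumptions for illustration only.

def viewpoint_score(unobserved_frac, uncertainty, semantic_sim, alpha=0.6):
    # Exploration term: reward viewpoints that cover unobserved or
    # uncertain regions of the map.
    exploration = unobserved_frac + uncertainty
    # Exploitation term: reward viewpoints near detections that are
    # semantically similar to the language-specified target.
    exploitation = semantic_sim
    return alpha * exploration + (1.0 - alpha) * exploitation

def select_viewpoint(candidates, alpha=0.6):
    # candidates: list of (unobserved_frac, uncertainty, semantic_sim)
    # tuples, one per candidate viewpoint; returns the best index.
    scores = [viewpoint_score(u, c, s, alpha=alpha) for (u, c, s) in candidates]
    return scores.index(max(scores))
```

With a high `alpha` the robot favors an unexplored frontier; with `alpha = 0` it greedily pursues the most semantically promising detection.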
Place skill failure examples due to not accounting for physical properties. Our place skill does not account for the size or shape of the object, which can result in the object being placed in an unstable orientation and rolling off the place location, or being dropped from too great a height and bouncing or toppling off the place location even when its orientation is fairly stable.
Failure example in finding the target object due to false negatives.
HELIOS can alleviate issues arising from false-positive object detections; however, it cannot help with false negatives. We can use a lower object detection threshold to reduce the number of false negatives, but our method cannot compensate for the case in which the underlying object detection method never detects the object.
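The trade-off above can be sketched in a few lines: lowering the detector's confidence threshold recovers more true detections (fewer false negatives), while a multi-view consistency check, a simplified stand-in for the fusion step described earlier, prunes spurious single-view false positives. The function name, threshold, and vote count below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: accept a candidate object only if it is detected
# with sufficient confidence in enough distinct views. A lower
# conf_threshold reduces false negatives; requiring min_views detections
# suppresses single-view false positives. If the detector never fires on
# the object (all confidences are 0.0), no threshold can recover it.

def keep_detection(confidences_per_view, conf_threshold=0.3, min_views=2):
    # confidences_per_view: per-view detector confidence for one candidate
    # object track; 0.0 means the object was not detected in that view.
    votes = sum(1 for c in confidences_per_view if c >= conf_threshold)
    return votes >= min_views
```

For example, a false positive seen confidently in only one view is rejected, an object detected with modest confidence across several views is kept, and an object the detector never fires on is lost regardless of the threshold.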