Audio-Visual Waypoints for Navigation

Changan Chen, Sagnik Majumder, Ziad Al-Halah, Ruohan Gao, Santhosh K. Ramakrishnan, Kristen Grauman. "Audio-Visual Waypoints for Navigation." arXiv preprint arXiv:2008.09622, 2020. https://arxiv.org/abs/2008.09622

Abstract: In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio observations. We introduce a reinforcement learning approach to audio-visual navigation with two key novel elements: 1) audio-visual waypoints that are dynamically set and learned end-to-end within the navigation policy, and 2) an acoustic memory that provides a structured, spatially grounded record of what the agent has heard as it moves. Both new ideas capitalize on the synergy of audio and visual data for revealing the geometry of an unmapped space. We demonstrate our approach on the challenging Replica environments of real-world 3D scenes. Our model improves the state of the art by a substantial margin, and our experiments reveal that learning the links between sights, sounds, and space is essential for audio-visual navigation.
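
To make the two components concrete, the following PyTorch snippet is a minimal, hypothetical sketch, not the authors' implementation: an `AcousticMemory` grid that records the sound intensity heard at each visited location, and a `WaypointPolicy` that fuses visual, audio, and memory features to score candidate waypoints on a local grid. All class names, layer sizes, grid dimensions, and the single intensity channel are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AcousticMemory:
    """Spatial grid that accumulates the sound intensity the agent has
    heard at each visited location. Grid size, cell size, and the single
    intensity channel are placeholder choices, not the paper's exact
    configuration."""

    def __init__(self, grid_size=20, cell_size=1.0):
        self.grid_size = grid_size
        self.cell_size = cell_size
        self.grid = torch.zeros(1, grid_size, grid_size)  # (C, H, W)

    def write(self, agent_xy, intensity):
        # Map the agent's metric position to a grid cell (agent starts at
        # the grid center) and record the current audio intensity there.
        cx = int(agent_xy[0] / self.cell_size) + self.grid_size // 2
        cy = int(agent_xy[1] / self.cell_size) + self.grid_size // 2
        if 0 <= cx < self.grid_size and 0 <= cy < self.grid_size:
            self.grid[0, cy, cx] = intensity

class WaypointPolicy(nn.Module):
    """Fuses visual, audio, and acoustic-memory features, then scores a
    local grid of candidate waypoint cells around the agent. All feature
    dimensions here are placeholders."""

    def __init__(self, feat_dim=128, local_grid=9):
        super().__init__()
        self.visual_enc = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(256, feat_dim), nn.ReLU())
        self.memory_enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat_dim), nn.ReLU())
        self.waypoint_head = nn.Linear(3 * feat_dim, local_grid * local_grid)

    def forward(self, visual_feat, audio_feat, memory_grid):
        fused = torch.cat([
            self.visual_enc(visual_feat),
            self.audio_enc(audio_feat),
            self.memory_enc(memory_grid.unsqueeze(0)),
        ], dim=-1)
        # Logits over local candidate waypoint cells; sampling (or argmax
        # over) one cell yields the next intermediate navigation goal.
        return self.waypoint_head(fused)

# Usage: write one audio observation into memory, then score waypoints.
memory = AcousticMemory()
memory.write((2.0, -1.0), intensity=0.7)
policy = WaypointPolicy()
logits = policy(torch.randn(1, 512), torch.randn(1, 256), memory.grid)
waypoint_cell = torch.argmax(logits, dim=-1)  # index into the 9x9 local grid
```

In a full agent, the chosen waypoint would be handed to a lower-level point-goal controller, and the memory would be re-read at every step so past audio observations stay spatially grounded rather than being compressed into a single recurrent state.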