The speaker will be Christine Evers from Imperial College London.
Sound is used in nature to detect, identify and track salient events, to navigate, and to self-localise. The ability to make sense of acoustic signals is therefore a fundamental prerequisite for robots and autonomous systems. Audition – the ability to hear – excels particularly in scenarios where lighting conditions are poor, where salient events are outside of the line-of-sight, and in crowded environments where sensors, such as LIDAR and RADAR, are unreliable. However, due to the challenges affecting acoustic signals, robot audition has, to date, received only limited attention in the research community.
This talk focuses on acoustic scene mapping for robot audition, addressing the questions: “What is around me?” and “Where am I?”. The first part of the talk addresses the practical challenges of tracking moving sound sources when microphone arrays are integrated on moving platforms. The second part addresses the self-localisation of the moving microphone array within the acoustic scene map when prior knowledge of the array's position and orientation is unavailable. To conclude, extensions, including multi-modal sensor fusion, and future directions are discussed.