Abstract
Drone audition, or auditory processing for drones equipped with a microphone array, is expected to compensate for problems affecting drones' visual processing, in particular occlusion and poor illumination. The current state of drone audition still assumes a single sound source. When a drone hears sounds originating from multiple sound sources, its sound-source localization function determines their directions. If two sources come very close to each other, however, the localization function alone cannot determine whether their trajectories cross or the sources approach and then depart. This ambiguity in tracking multiple sound sources is resolved by data association. Typical data-association methods use the class label assigned to each separated sound, but they are prone to errors when identification fails. Instead of hard labels obtained by classification, this study uses the set of classification measures produced by support vector machines (SVMs), which avoids labeling failures and can handle unknown signals. The effectiveness of the proposed approach is validated through simulations and field experiments.
Citation
Wakabayashi, M., Okuno, H. G., & Kumon, M. (2020). Multiple Sound Source Position Estimation by Drone Audition Based on Data Association between Sound Source Localization and Identification. IEEE Robotics and Automation Letters, 5(2), 782–789. https://doi.org/10.1109/LRA.2020.2965417