Abstract
Affordance segmentation partitions object images into parts according to the interactions they afford, usually to drive safe robotic grasping. Most approaches to affordance segmentation are computationally demanding, which hinders their integration into wearable robots, whose compact structure typically offers limited processing power. This article describes a design strategy for tiny deep neural networks (DNNs) that can accomplish affordance segmentation and deploy effectively on microcontroller-like processing units. This is attained through a specialized, hardware-aware neural architecture search (HW-NAS). The method was validated by assessing the performance of several tiny networks, at different levels of complexity, on three benchmark datasets. The outcome measure was the accuracy of the generated affordance maps and of the associated spatial object descriptors (orientation, center of mass, and size). The experimental results confirmed that the proposed method compared satisfactorily with state-of-the-art approaches while allowing a considerable reduction in both network complexity and inference time. The proposed networks can therefore support the development of a teleceptive sensing system to improve the semiautomatic control of wearable robots for assisting grasping.
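The abstract does not detail how the spatial object descriptors (orientation, center of mass, and size) are extracted from an affordance map. A minimal sketch, assuming they are computed from a binary affordance mask via standard image moments; the function name mask_descriptors and the NumPy-based formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def mask_descriptors(mask: np.ndarray):
    """Center of mass, orientation, and size of a binary affordance
    mask, derived from first- and second-order image moments."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # empty map: no descriptors to report
    # Center of mass: mean pixel coordinates of the segmented region.
    cx, cy = xs.mean(), ys.mean()
    # Central second-order moments (normalized by region area).
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Orientation of the principal axis, in radians.
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    # Size as the pixel area of the segmented region.
    area = int(ys.size)
    return (cx, cy), theta, area

# Example on a synthetic elongated region:
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 10:50] = True
(cx, cy), theta, area = mask_descriptors(mask)  # theta ≈ 0 (horizontal)
```

Descriptors of this kind are cheap to compute on a microcontroller, which is consistent with the article's emphasis on low-complexity inference.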
Citation
Ragusa, E., Dosen, S., Zunino, R., & Gastaldo, P. (2023). Affordance Segmentation Using Tiny Networks for Sensing Systems in Wearable Robotic Devices. IEEE Sensors Journal, 23(19), 23916–23926. https://doi.org/10.1109/JSEN.2023.3308615