Continuous Perception for Classifying Shapes and Weights of Garments for Robotic Vision Applications

Abstract

We present an approach to continuous perception for robotic laundry tasks. Our hypothesis is that a garment's shape and weight can be predicted visually by a neural network that learns the dynamic changes of garments from video sequences. Continuous perception is leveraged during training by inputting consecutive frames, from which the network learns how a garment deforms. To evaluate our hypothesis, we captured a dataset of 40K RGB and depth video sequences while garments were being manipulated. We also conducted ablation studies to understand whether the neural network learns the physical properties of garments. Our findings suggest that a modified AlexNet-LSTM architecture achieves the best classification performance for garment shapes and discretised weights. To further provide evidence for continuous perception, we evaluated our network on unseen video sequences and computed the 'Moving Average' over a sequence of predictions. We found that our network has a classification accuracy of 48% and 60% for the shapes and weights of garments, respectively.
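The 'Moving Average' evaluation described above can be sketched as follows: per-frame class probabilities from the network are averaged over a sliding window, and the smoothed distribution determines the predicted class. This is a minimal illustration, not the authors' code; the class labels and window size are assumptions.

```python
# Sketch of smoothing per-frame predictions with a moving average, as in
# continuous-perception evaluation over a video sequence.
# The garment shape classes and window size are illustrative assumptions,
# not taken from the paper.

SHAPE_CLASSES = ["pant", "shirt", "sweater", "towel", "t-shirt"]

def moving_average_predict(frame_probs, window=5):
    """Average the last `window` per-frame probability vectors and
    return the class with the highest smoothed probability."""
    recent = frame_probs[-window:]
    n = len(recent)
    smoothed = [sum(p[i] for p in recent) / n
                for i in range(len(recent[0]))]
    return SHAPE_CLASSES[smoothed.index(max(smoothed))]

# Example: three frames of hypothetical network outputs for one garment.
probs = [
    [0.1, 0.6, 0.1, 0.1, 0.1],
    [0.2, 0.5, 0.1, 0.1, 0.1],
    [0.3, 0.4, 0.1, 0.1, 0.1],
]
print(moving_average_predict(probs))  # smoothed argmax -> "shirt"
```

Averaging over consecutive frames suppresses single-frame misclassifications caused by transient deformation states, which is the motivation for evaluating sequences rather than individual frames.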

Citation (APA)

Duan, L., & Aragon-Camarasa, G. (2022). Continuous Perception for Classifying Shapes and Weights of Garments for Robotic Vision Applications. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 4, pp. 348–355). Science and Technology Publications, Lda. https://doi.org/10.5220/0010804300003124