Soft continuum bodies have demonstrated their effectiveness in generating flexible and adaptive functionalities by capitalizing on the rich deformability of soft materials. Compared with a rigid-body robot, the morphological dynamics of a soft continuum body are generally difficult to model and emulate. In addition, a soft continuum body potentially has infinitely many degrees of freedom, so manually annotating its dynamics from external sensory data such as video requires considerable labor. In this study, we propose a novel noninvasive framework for automatically extracting the skeletal dynamics of a soft continuum body from video and demonstrate the applications and effectiveness of our framework. First, we show that our framework can extract skeletal dynamics from animal videos, which can be effectively utilized for the analysis of soft continuum bodies, including animal motion. Next, we focus on a soft continuum arm, a commonly used platform in soft robotics, and evaluate its potential information-processing capability. Normally, controlling such a high-dimensional system requires introducing many sensors to completely capture the motion dynamics, which degrades the material's softness. We illustrate that evaluating the memory capacity and the sensory reconstruction error enables us to determine the minimum number of sensors sufficient to fully capture the state dynamics, which is highly useful in designing a sensor arrangement for a soft robot. Finally, we release the software developed in this study as open source for the biology and soft robotics communities, contributing to the automation of the annotation process required for the motion analysis of soft continuum bodies.
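The skeleton-extraction step is described above only at a high level. The sketch below illustrates one way per-frame skeletonization from video could look, assuming simple threshold-based segmentation, OpenCV for video I/O, and scikit-image's skeletonize; the function name and threshold value are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of per-frame skeleton extraction from video.
# NOT the authors' released pipeline; it only illustrates reducing a soft
# continuum body in each frame to a one-pixel-wide skeleton.
import cv2                               # video I/O and thresholding
from skimage.morphology import skeletonize

def extract_skeletons(video_path, thresh=127):
    """Yield a boolean skeleton mask for each frame of the video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Simple global threshold to separate the body from the background
        # (real footage may need adaptive or learned segmentation instead).
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        yield skeletonize(mask > 0)      # one-pixel-wide medial skeleton
    cap.release()
```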
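The sensor-count argument rests on measuring memory capacity from a subset of sensor signals. The sketch below uses the standard linear-readout definition of memory capacity from reservoir computing (the sum over delays of the squared correlation between a delayed input and its ridge-regression reconstruction from the sensor states); the function name, ridge parameter, and the loop over sensor counts are assumptions for illustration, not the paper's exact evaluation protocol.

```python
# Minimal sketch of a memory-capacity evaluation used to compare sensor subsets.
# Illustrative only; follows the common echo-state-network definition of MC.
import numpy as np

def memory_capacity(states, inputs, max_delay=50, ridge=1e-6):
    """states: (T, n_sensors) sensor time series; inputs: (T,) input signal."""
    T = len(inputs)
    X = np.hstack([states, np.ones((T, 1))])   # add bias column
    mc = 0.0
    for k in range(1, max_delay + 1):
        Xk, yk = X[k:], inputs[:-k]            # reconstruct u(t - k) from states at t
        # Ridge-regularized linear readout
        W = np.linalg.solve(Xk.T @ Xk + ridge * np.eye(Xk.shape[1]), Xk.T @ yk)
        pred = Xk @ W
        r = np.corrcoef(pred, yk)[0, 1]
        mc += r ** 2                           # squared correlation at delay k
    return mc

# Usage idea: check how capacity saturates as more (hypothetical) sensors are used.
# states_all: (T, N) matrix of tracked skeleton-node trajectories; u: (T,) input.
# for n in range(1, N + 1):
#     print(n, memory_capacity(states_all[:, :n], u))
```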
Inoue, K., Kuniyoshi, Y., Kagaya, K., & Nakajima, K. (2022). Skeletonizing the Dynamics of Soft Continuum Body from Video. Soft Robotics, 9(2), 201–211. https://doi.org/10.1089/soro.2020.0110