Multimodal Sensors and ML‐Based Data Fusion for Advanced Robots

  • Duan S
  • Shi Q
  • Wu J
Citations: N/A · Readers: 28 (Mendeley)

This article is free to access.

Abstract

Nature has proved that multiple sensing and processing capabilities are critical for target recognition and species survival. As such, advanced robots that perform missions in unstructured environments require organism-like multimodal sensing and processing systems to handle complex environmental stimuli. Herein, recent progress in multimodal sensing and processing systems for advanced robotics is reviewed. Multimodal sensors are summarized, including tactile sensors that capture the surface properties of objects (i.e., thermal conductivity, temperature, softness, and electron affinity), visual sensors that capture the color and size of objects, and gas sensors that capture object smell. The multimodal data fusion algorithms that process the signals from these sensors to achieve object recognition and decision making are also presented. The challenges and future development of multimodal sensors and data fusion algorithms are further discussed. Advances in these areas open new avenues for advanced robotics applications in human-robot collaboration, rescue missions, garbage sorting, and intelligent prosthetics.
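To make the abstract's notion of ML-based multimodal data fusion concrete, the sketch below shows the simplest common scheme, feature-level ("early") fusion: per-modality feature vectors are concatenated into one vector per object and fed to a single classifier. This is an illustrative sketch only; the sensor channels, feature dimensions, synthetic data, and the random-forest classifier are all assumptions for demonstration and are not taken from the review.

```python
# Minimal feature-level fusion sketch (illustrative; not the review's method).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_classes = 600, 4

# Synthetic stand-ins for per-object sensor readings (hypothetical dimensions):
tactile = rng.normal(size=(n_samples, 8))   # e.g., thermal conductivity, softness
visual  = rng.normal(size=(n_samples, 16))  # e.g., color and size descriptors
gas     = rng.normal(size=(n_samples, 4))   # e.g., gas-sensor channels (smell)
labels  = rng.integers(0, n_classes, size=n_samples)

# Feature-level fusion: concatenate all modalities into one vector per object.
fused = np.hstack([tactile, visual, gas])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# Scale per feature so no single modality dominates by raw magnitude.
scaler = StandardScaler().fit(X_train)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

# With random synthetic data the score sits near chance (~1/n_classes);
# with real correlated sensor features, fusion can beat any single modality.
print("accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```

An alternative design is decision-level ("late") fusion, where one classifier per modality is trained and their predictions are combined, e.g., by voting or averaging class probabilities; which scheme works better depends on how correlated and how noisy the modalities are.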

Citation (APA)

Duan, S., Shi, Q., & Wu, J. (2022). Multimodal Sensors and ML‐Based Data Fusion for Advanced Robots. Advanced Intelligent Systems, 4(12). https://doi.org/10.1002/aisy.202200213
