Reinforcement learning and trustworthy autonomy

Abstract

Cyber-Physical Systems (CPS) are characterized by interdependence between their physical and software components and are typically designed by teams of mechanical, electrical, and software engineers. The interdisciplinary nature of CPS makes them difficult to design with safety guarantees. When autonomy is incorporated, design complexity increases and, especially, providing safety assurances becomes more difficult. Vision-based reinforcement learning is an increasingly popular family of machine learning algorithms that may be used to provide autonomy for CPS. Understanding how visual stimuli trigger various actions is critical for trustworthy autonomy. In this chapter, we introduce reinforcement learning in the context of Microsoft's AirSim drone simulator. Specifically, we guide the reader through the steps necessary to create a drone simulation environment suitable for experimenting with vision-based reinforcement learning. We also explore how existing vision-oriented deep learning analysis methods may be applied to safety verification in vision-based reinforcement learning applications.
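As a rough sketch of what such an experimentation environment could look like, the code below wraps AirSim's Python API in a minimal Gym-style interface whose observations are camera images and whose actions are velocity commands. The discrete action set, reward, and camera handling are illustrative assumptions rather than the chapter's actual setup, and the code presumes a running AirSim simulation with the airsim Python package installed.

    # Minimal sketch of a vision-based RL environment built on AirSim's Python API.
    # The action set, reward, and image handling are placeholders for illustration.
    import numpy as np
    import airsim

    class DroneVisionEnv:
        """Gym-style wrapper: observations are camera images, actions are velocity commands."""

        # Hypothetical discrete actions: (vx, vy, vz) velocity commands in m/s.
        ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, -1), (0, 0, 1)]

        def __init__(self):
            self.client = airsim.MultirotorClient()
            self.client.confirmConnection()
            self.client.enableApiControl(True)
            self.client.armDisarm(True)

        def reset(self):
            self.client.reset()
            self.client.enableApiControl(True)
            self.client.armDisarm(True)
            self.client.takeoffAsync().join()
            return self._observe()

        def step(self, action_index):
            vx, vy, vz = self.ACTIONS[action_index]
            # Apply the velocity command for one second of simulated time.
            self.client.moveByVelocityAsync(vx, vy, vz, duration=1.0).join()
            observation = self._observe()
            collided = self.client.simGetCollisionInfo().has_collided
            # Placeholder reward: penalize collisions, small bonus for staying airborne.
            reward = -10.0 if collided else 0.1
            return observation, reward, collided, {}

        def _observe(self):
            # Request one uncompressed RGB image from the front-facing camera ("0").
            response = self.client.simGetImages(
                [airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)]
            )[0]
            image = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
            return image.reshape(response.height, response.width, 3)

A reinforcement learning agent would then interact with this environment in the usual loop: call reset(), choose an action from the current image, call step(), and update its policy from the returned reward.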

Citation
Luo, J., Green, S., Feghali, P., Legrady, G., & Koç, Ç. K. (2018). Reinforcement learning and trustworthy autonomy. In Cyber-Physical Systems Security (pp. 191–217). Springer International Publishing. https://doi.org/10.1007/978-3-319-98935-8_10
