Verifying Deep-RL-Driven Systems


Abstract

Deep reinforcement learning (RL) has recently been successfully applied to networking contexts including routing, flow scheduling, congestion control, packet classification, cloud resource management, and video streaming. Deep-RL-driven systems automate decision making, and have been shown to outperform state-of-the-art handcrafted systems in important domains. However, the (typical) non-explainability of decisions induced by the deep learning machinery employed by these systems renders reasoning about crucial system properties, including correctness and security, extremely difficult. We show that despite the obscurity of decision making in these contexts, verifying that deep-RL-driven systems adhere to desired, designer-specified behavior is achievable. To this end, we initiate the study of formal verification of deep RL and present Verily, a system for verifying deep-RL-based systems that leverages recent advances in verification of deep neural networks. We employ Verily to verify recently introduced deep-RL-driven systems for adaptive video streaming, cloud resource management, and Internet congestion control. Our results expose scenarios in which deep-RL-driven decision making yields undesirable behavior. We discuss guidelines for building deep-RL-driven systems that are both safer and easier to verify.
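To make the kind of check the abstract describes concrete, here is a minimal, hypothetical sketch (not Verily itself, whose encoding the abstract does not detail): it soundly bounds the outputs of a tiny ReLU policy network over a box of input observations via interval bound propagation, and reports whether a designer-specified output property provably holds. The network weights and the property are illustrative assumptions.

```python
# Illustrative sketch (NOT Verily's actual method): sound-but-incomplete
# verification of a safety property of a toy deep-RL policy network,
# using interval bound propagation over an input box.

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo_acc = hi_acc = bias
        for w, l, h in zip(row, lo, hi):
            if w >= 0:
                lo_acc += w * l
                hi_acc += w * h
            else:
                lo_acc += w * h
                hi_acc += w * l
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """Propagate bounds through an element-wise ReLU."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def verify_output_upper_bound(layers, in_lo, in_hi, threshold):
    """Return True if every network output is provably < threshold for
    all inputs in the box [in_lo, in_hi]. Sound but incomplete: a False
    result does not prove a violation exists."""
    lo, hi = in_lo, in_hi
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on hidden layers only
            lo, hi = relu_bounds(lo, hi)
    return all(h < threshold for h in hi)

# Hypothetical 2-2-1 policy network.
layers = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # hidden layer (W, b)
    ([[1.0, -1.0]], [0.0]),                   # output layer (W, b)
]
# Hypothetical property: for all observations in [0, 1]^2,
# the action score stays strictly below 1.0.
print(verify_output_upper_bound(layers, [0.0, 0.0], [1.0, 1.0], 1.0))  # True
```

Complete verifiers in this line of work replace the interval relaxation with an exact SMT-style encoding of the ReLU constraints, which eliminates the incompleteness at the cost of worst-case exponential search.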

Citation (APA)

Kazak, Y., Barrett, C., Katz, G., & Schapira, M. (2019). Verifying deep-RL-driven systems. In NetAI 2019 - Proceedings of the 2019 ACM SIGCOMM Workshop on Network Meets AI and ML, Part of SIGCOMM 2019 (pp. 83–89). Association for Computing Machinery, Inc. https://doi.org/10.1145/3341216.3342218
