Factor selection for reinforcement learning in HTTP adaptive streaming


Abstract

At present, HTTP Adaptive Streaming (HAS) is developing into a key technology for video delivery over the Internet. In this delivery strategy, the client proactively and adaptively requests a quality version of each chunked video segment based on its playback buffer, the perceived network bandwidth, and other relevant factors. In this paper, we discuss the use of reinforcement learning (RL) to learn the optimal request strategy at the HAS client by progressively maximizing a pre-defined Quality of Experience (QoE)-related reward function. Within the RL framework, we identify the most influential factors for the request strategy using a forward variable selection algorithm. The performance of the RL-based HAS client is evaluated in a Video-on-Demand (VoD) simulation system. Results show that, given the QoE-related reward function, the RL-based HAS client is able to optimize the quantitative QoE. Compared with a conventional HAS system, the RL-based HAS client is more robust and flexible under versatile network conditions. © 2014 Springer International Publishing.
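The client-side request strategy the abstract describes can be sketched as a tabular Q-learning loop over discretized playback-buffer and bandwidth observations. The bitrate ladder, state discretization, reward weights, and learning parameters below are illustrative assumptions only; the paper's actual state factors are chosen by its forward variable selection algorithm, and its QoE reward function differs in detail.

```python
import random
from collections import defaultdict

# Assumed bitrate ladder in kbit/s; not taken from the paper.
QUALITIES = [300, 750, 1500, 3000]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative learning parameters

Q = defaultdict(float)  # Q[(state, action)] -> value estimate, initialized to 0


def state(buffer_s, bandwidth_kbps):
    """Discretize playback buffer (seconds) and measured bandwidth into a state."""
    return (min(int(buffer_s // 5), 5), min(int(bandwidth_kbps // 500), 7))


def choose_quality(s):
    """Epsilon-greedy selection of a quality index for the next segment request."""
    if random.random() < EPSILON:
        return random.randrange(len(QUALITIES))
    return max(range(len(QUALITIES)), key=lambda a: Q[(s, a)])


def reward(quality_idx, rebuffered, switched):
    """Toy QoE-related reward: favor high quality, penalize stalls and switches."""
    return quality_idx - (10.0 if rebuffered else 0.0) - (0.5 if switched else 0.0)


def update(s, a, r, s_next):
    """One-step Q-learning update after a segment finishes downloading."""
    best_next = max(Q[(s_next, b)] for b in range(len(QUALITIES)))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```

In a simulation such as the paper's VoD setup, the client would call `choose_quality` before each segment request and `update` once the segment's download outcome (stall, quality switch) is known, so the table gradually encodes the learned request strategy.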

Citation (APA)

Wu, T., & Van Leekwijck, W. (2014). Factor selection for reinforcement learning in HTTP adaptive streaming. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8325 LNCS, pp. 553–567). https://doi.org/10.1007/978-3-319-04114-8_47
