Modeling User Performance for Moving Target

  • Claypool M
  • Eg R
  • Raaen K

Abstract

Lossy image and video compression algorithms yield visually annoying artifacts, including blocking, blurring, and ringing, especially at low bit-rates. To reduce these artifacts, post-processing techniques have been extensively studied. Recently, inspired by the great success of convolutional neural networks (CNNs) in computer vision, some research has adopted CNNs for post-processing, mostly for JPEG-compressed images. In this paper, we present a CNN-based post-processing algorithm for High Efficiency Video Coding (HEVC), the state-of-the-art video coding standard. We redesign a Variable-filter-size Residue-learning CNN (VRCNN) to improve performance and to accelerate network training. Experimental results show that using our VRCNN as post-processing leads to an average 4.6% bit-rate reduction compared to the HEVC baseline. The VRCNN outperforms previously studied networks, achieving a higher bit-rate reduction, lower memory cost, and a multi-fold computational speedup.
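The abstract names the two ideas behind VRCNN: parallel convolution branches with different ("variable") filter sizes, and residue learning, where the network predicts a correction that is added back to the decoded frame. As a rough illustration only, the sketch below combines those two ideas in PyTorch; the layer count, channel widths, and filter sizes are placeholder assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn

class VariableFilterResidualCNN(nn.Module):
    """Illustrative post-processing CNN: multi-size convolution branches
    plus residue learning. Not the paper's exact VRCNN configuration."""

    def __init__(self, channels=1):
        super().__init__()
        # Feature extraction with a relatively large receptive field.
        self.extract = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.ReLU())
        # Parallel branches with different filter sizes, fused by concatenation.
        self.branch3 = nn.Sequential(
            nn.Conv2d(64, 16, kernel_size=3, padding=1), nn.ReLU())
        self.branch5 = nn.Sequential(
            nn.Conv2d(64, 16, kernel_size=5, padding=2), nn.ReLU())
        # Reconstruction layer maps the fused features to a residual image.
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=3, padding=1)

    def forward(self, decoded):
        feats = self.extract(decoded)
        fused = torch.cat([self.branch3(feats), self.branch5(feats)], dim=1)
        residual = self.reconstruct(fused)
        # Residue learning: only the correction term is predicted,
        # then added to the decoded (artifact-laden) input frame.
        return decoded + residual

if __name__ == "__main__":
    net = VariableFilterResidualCNN()
    frame = torch.rand(1, 1, 64, 64)   # dummy decoded luma block
    restored = net(frame)
    print(restored.shape)              # torch.Size([1, 1, 64, 64])
```

Predicting a residual rather than the restored frame itself keeps the mapping close to identity, which typically makes such post-processing networks faster to train; the mixed filter sizes are one way to capture artifacts of different spatial extents.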

Cite

CITATION STYLE

APA

Claypool, M., Eg, R., & Raaen, K. (2017). Modeling User Performance for Moving Target. MultiMedia Modeling, 1, 226–237. https://doi.org/10.1007/978-3-319-51811-4
