Deep image compression in the wavelet transform domain based on high frequency sub-band prediction


Abstract

In this paper, we propose to use deep neural networks for image compression in the wavelet transform domain. When the input image is transformed from the spatial pixel domain to the wavelet transform domain, one low-frequency sub-band (LF sub-band) and three high-frequency sub-bands (HF sub-bands) are generated. The low-frequency sub-band is first used to predict each high-frequency sub-band to eliminate redundancy between the sub-bands, after which the sub-bands are fed into separate auto-encoders for encoding. To further improve compression efficiency, we use a conditional probability model to estimate the context-dependent prior probability of the encoded codes, which can then be used for entropy coding. The entire training process is unsupervised, and the auto-encoders and the conditional probability model are trained jointly. The experimental results show that the proposed approach outperforms JPEG, JPEG2000, BPG, and several mainstream neural network-based image compression methods. Furthermore, it produces better visual quality with clearer details and textures because more high-frequency coefficients can be preserved, thanks to the high-frequency prediction.
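To illustrate the decomposition the abstract refers to, the sketch below computes a one-level 2-D Haar wavelet transform with plain NumPy, producing the one low-frequency sub-band (LL) and three high-frequency sub-bands (LH, HL, HH). This is a generic illustration of the transform domain, not the paper's actual pipeline; the function name and normalization are assumptions for the example.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform (rows, then columns).

    img: 2-D float array with even height and width.
    Returns the low-frequency sub-band and the three
    high-frequency sub-bands, each quarter-size.
    """
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    LL = (a[0::2, :] + a[1::2, :]) / 2.0      # low-frequency sub-band
    LH = (a[0::2, :] - a[1::2, :]) / 2.0      # horizontal detail
    HL = (d[0::2, :] + d[1::2, :]) / 2.0      # vertical detail
    HH = (d[0::2, :] - d[1::2, :]) / 2.0      # diagonal detail
    return LL, (LH, HL, HH)

img = np.arange(64, dtype=float).reshape(8, 8)
LL, (LH, HL, HH) = haar_dwt2(img)
print(LL.shape, LH.shape, HL.shape, HH.shape)  # each sub-band is (4, 4)
```

In a scheme like the one described, the three quarter-size HF sub-bands would then be predicted from LL before each sub-band is passed to its own auto-encoder.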

Citation (APA)

Yang, C., Zhao, Y., & Wang, S. (2019). Deep image compression in the wavelet transform domain based on high frequency sub-band prediction. IEEE Access, 7, 52484–52497. https://doi.org/10.1109/ACCESS.2019.2911403
