Quantum machine learning (QML) has received increasing attention due to its potential to outperform classical machine learning methods in problems such as classification and identification tasks. A subclass of QML methods is quantum generative adversarial networks (QGANs), which have been studied as a quantum counterpart of classical GANs widely used in image manipulation and generation tasks. Existing work on QGANs is still limited to small-scale proof-of-concept examples based on significantly downscaled images. Here, we integrate classical and quantum techniques to propose a new hybrid quantum-classical GAN framework. We demonstrate its superior learning capabilities over existing quantum techniques by generating 28 × 28 pixel grayscale images, without dimensionality reduction or classical pre/postprocessing, on multiple classes of the standard Modified National Institute of Standards and Technology (MNIST) and Fashion MNIST datasets, achieving results comparable to classical frameworks with three orders of magnitude fewer trainable generator parameters. To gain further insight into the workings of our hybrid approach, we systematically explore the impact of its parameter space by varying the number of qubits, the size of image patches, the number of layers in the generator, the shape of the patches, and the choice of prior distribution. Our results show that increasing the quantum generator size generally improves the learning capability of the network. The developed framework provides a foundation for the future design of QGANs with optimal parameter sets tailored for complex image generation tasks.
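To make the patch-based, hybrid generator idea concrete, below is a minimal sketch of how a set of small variational quantum circuits could each produce one patch of a 28 × 28 image, assuming a PennyLane/PyTorch stack. The qubit count, ansatz, patch layout, and normalization here are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a patch-based quantum generator (not the paper's implementation).
# Each sub-generator is a small variational circuit whose measurement probabilities
# form one patch (here, one row) of a 28 x 28 grayscale image.
import numpy as np
import pennylane as qml
import torch

n_qubits = 5      # assumed qubits per sub-generator (2^5 = 32 output probabilities)
n_layers = 6      # assumed depth of the variational ansatz
patch_size = 28   # pixels kept per patch (one image row in this sketch)
n_patches = 28    # number of sub-generators needed for a 28 x 28 image

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def patch_circuit(noise, weights):
    # Encode the latent noise vector as single-qubit rotations.
    for w in range(n_qubits):
        qml.RY(noise[w], wires=w)
    # Variational layers: parameterized rotations plus a ring of entangling CZ gates.
    for layer in range(n_layers):
        for w in range(n_qubits):
            qml.RY(weights[layer, w], wires=w)
        for w in range(n_qubits):
            qml.CZ(wires=[w, (w + 1) % n_qubits])
    return qml.probs(wires=range(n_qubits))

def generate_image(noise, weights_per_patch):
    """Stitch the measurement distributions of all sub-generators into one image."""
    patches = []
    for p in range(n_patches):
        probs = patch_circuit(noise, weights_per_patch[p])
        patch = probs[:patch_size]          # keep the first patch_size probabilities
        patch = patch / torch.max(patch)    # rescale pixel values to [0, 1]
        patches.append(patch)
    return torch.stack(patches).reshape(28, 28)

# Example: 28 patches, each with its own (n_layers x n_qubits) trainable angles.
weights = torch.rand(n_patches, n_layers, n_qubits, requires_grad=True)
z = torch.rand(n_qubits) * np.pi / 2
image = generate_image(z, weights)
print(image.shape)  # torch.Size([28, 28])
```

Under these illustrative settings the generator has 28 × 6 × 5 = 840 trainable angles, which gives a sense of how a quantum generator can use orders of magnitude fewer parameters than a classical GAN generator of comparable output resolution; the actual counts and circuit design in the paper may differ.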
Citation
Tsang, S. L., West, M. T., Erfani, S. M., & Usman, M. (2023). Hybrid Quantum-Classical Generative Adversarial Network for High-Resolution Image Generation. IEEE Transactions on Quantum Engineering, 4. https://doi.org/10.1109/TQE.2023.3319319