OfGAN: Realistic Rendition of Synthetic Colonoscopy Videos

Abstract

Data-driven methods usually require a large amount of labelled training data to generalize, especially in medical imaging. Targeting colonoscopy, we develop the Optical Flow Generative Adversarial Network (OfGAN) to transform simulated colonoscopy videos into realistic ones while preserving annotations. The advantages of our method are three-fold: the transformed videos are visually much more realistic; annotations, such as the optical flow of the source video, are preserved in the transformed video; and the method is robust to noise. The model uses a cycle-consistent structure together with optical flow to enforce both spatial and temporal consistency via adversarial training. Both qualitative and quantitative evaluation demonstrate that OfGAN substantially outperforms the baseline method on related tasks.
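The abstract only names the two ingredients, a cycle-consistent structure and an optical-flow temporal term. The sketch below is not the authors' implementation; it is a minimal illustration of how a cycle-consistency loss and a flow-based temporal-consistency loss are commonly combined in video translation GANs. All names (G_sim2real, G_real2sim, warp, the loss functions) are hypothetical, and the flow is assumed to be the ground-truth optical flow available from the simulator.

```python
import torch
import torch.nn.functional as F


def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp `frame` (N, C, H, W) with a dense flow field (N, 2, H, W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Displace pixel coordinates by the flow, then normalise to [-1, 1]
    # as required by grid_sample.
    x = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    y = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    grid = torch.stack((x, y), dim=3)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)


def cycle_loss(G_sim2real, G_real2sim, sim_frame):
    """Cycle consistency: sim -> real -> sim should reproduce the input frame."""
    return F.l1_loss(G_real2sim(G_sim2real(sim_frame)), sim_frame)


def temporal_loss(G_sim2real, sim_t, sim_t1, flow_t_to_t1):
    """Consecutive translated frames should move according to the known simulator flow."""
    fake_t = G_sim2real(sim_t)
    fake_t1 = G_sim2real(sim_t1)
    return F.l1_loss(warp(fake_t1, flow_t_to_t1), fake_t)
```

In such a setup, these terms would be added to the usual adversarial losses of the two generators; the temporal term is what lets the translated video inherit the source video's optical-flow annotation.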

Citation (APA)

Xu, J., Anwar, S., Barnes, N., Grimpen, F., Salvado, O., Anderson, S., & Armin, M. A. (2020). OfGAN: Realistic Rendition of Synthetic Colonoscopy Videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12263 LNCS, pp. 732–741). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59716-0_70
