Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis

Abstract

Automating the analysis of imagery of the Gastrointestinal (GI) tract captured during endoscopy procedures has substantial potential benefits for patients, as it can provide diagnostic support to medical practitioners and reduce mistakes caused by human error. To further the development of such methods, we propose a two-stream model for endoscopic image analysis. Our model fuses two streams of deep feature inputs by mapping their inherent relations through a novel relational network model, to better model symptoms and classify the image. In contrast to handcrafted feature-based models, our proposed network is able to learn features automatically and outperforms existing state-of-the-art methods on two public datasets: KVASIR and Nerthus. Our extensive evaluations illustrate the importance of having two input streams instead of a single stream, and demonstrate the merits of the proposed relational network architecture for combining those streams.
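
To make the described architecture concrete, the sketch below shows one plausible way a two-stream model with a relational fusion head could be assembled in PyTorch. The backbone choice (ResNet-18), layer sizes, and class count are illustrative assumptions and not the authors' exact configuration; the relational module here is simply a small MLP over the concatenated stream features.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamRelationalClassifier(nn.Module):
    """Illustrative two-stream model with a relational fusion head.

    Two CNN backbones each produce a deep feature vector; a small MLP
    ("relational" module) maps the concatenated pair to a joint
    representation, which a linear layer classifies. All sizes and
    backbones are assumptions for illustration only.
    """

    def __init__(self, num_classes: int = 8, feat_dim: int = 512):
        super().__init__()
        # Two feature streams (ResNet-18 trunks with the final FC removed).
        self.stream_a = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
        self.stream_b = nn.Sequential(*list(models.resnet18(weights=None).children())[:-1])
        # Relational module: models the relation between the two feature vectors.
        self.relation = nn.Sequential(
            nn.Linear(2 * feat_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        # Inputs are batches of images, e.g. (B, 3, 224, 224).
        f_a = self.stream_a(x_a).flatten(1)  # (B, feat_dim)
        f_b = self.stream_b(x_b).flatten(1)  # (B, feat_dim)
        joint = self.relation(torch.cat([f_a, f_b], dim=1))
        return self.classifier(joint)        # class logits


# Example usage with random tensors standing in for the two input streams.
model = TwoStreamRelationalClassifier(num_classes=8)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 8])
```

The 8-way output here only reflects that the KVASIR dataset contains eight classes; how the paper forms its two input streams and structures the relational network is detailed in the full text.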

Citation (APA)

Gammulle, H., Denman, S., Sridharan, S., & Fookes, C. (2020). Two-Stream Deep Feature Modelling for Automated Video Endoscopy Data Analysis. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12263 LNCS, pp. 742–751). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-59716-0_71
