Feature Aggregation Decoder for Segmenting Laparoscopic Scenes

Abstract

Laparoscopic scene segmentation is one of the key building blocks required for developing advanced computer-assisted interventions and robotic automation. Scene segmentation approaches often rely on encoder-decoder architectures that encode a representation of the input, which is then decoded into semantic pixel labels. In this paper, we propose to use the deep Xception model for the encoder and a simple yet effective decoder built around a feature aggregation module. Our feature aggregation module constructs a mapping function that reuses and transfers encoder features, combining information across all feature scales to build a richer representation that preserves both high-level context and low-level boundary information. We argue that this aggregation module enables us to simplify the decoder and reduce its parameter count. We have evaluated our approach on two datasets; our experimental results show that our model outperforms state-of-the-art models under the same experimental setup and significantly improves on previous results on the EndoVis'15 dataset.
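To make the aggregation idea concrete, below is a minimal PyTorch sketch of a module that upsamples encoder features from every scale to a common resolution, projects each to a shared channel width, and fuses them into one representation. The class name, channel widths, bilinear upsampling, and 1x1 projections are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of multi-scale feature aggregation, assuming bilinear
# upsampling and 1x1 channel projections; these choices are illustrative
# and are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAggregation(nn.Module):
    """Fuse encoder features from all scales so the decoder sees both
    high-level context and low-level boundary detail."""

    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        # One 1x1 projection per encoder scale (channel counts are assumed).
        self.projections = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # Fuse the concatenated multi-scale features into one representation.
        self.fuse = nn.Sequential(
            nn.Conv2d(out_channels * len(in_channels), out_channels,
                      kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, features):
        # `features` is a list of encoder outputs, highest resolution first.
        target_size = features[0].shape[-2:]
        upsampled = [
            F.interpolate(proj(f), size=target_size, mode="bilinear",
                          align_corners=False)
            for proj, f in zip(self.projections, features)
        ]
        return self.fuse(torch.cat(upsampled, dim=1))


if __name__ == "__main__":
    # Four hypothetical encoder scales; 728/2048 echo Xception widths,
    # but the exact taps used by the paper are an assumption here.
    channels = [128, 256, 728, 2048]
    feats = [torch.randn(1, c, 64 // 2 ** i, 64 // 2 ** i)
             for i, c in enumerate(channels)]
    agg = FeatureAggregation(channels)
    print(agg(feats).shape)  # torch.Size([1, 256, 64, 64])
```

Because the multi-scale fusion already carries boundary detail from the early encoder stages, the decoder that follows such a module can stay shallow, which is consistent with the parameter reduction the abstract claims.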

Cite

APA

Kadkhodamohammadi, A., Luengo, I., Barbarisi, S., Taleb, H., Flouty, E., & Stoyanov, D. (2019). Feature aggregation decoder for segmenting laparoscopic scenes. In Lecture Notes in Computer Science (Vol. 11796, pp. 3–11). Springer. https://doi.org/10.1007/978-3-030-32695-1_1
