Accurate and Efficient LIF-Nets for 3D Detection and Recognition

This article is free to access.

Abstract

3D object detection and recognition are crucial tasks for many spatiotemporal processing applications, such as computer-aided diagnosis and autonomous driving. Although prevalent 3D Convolutional Networks (ConvNets) have steadily improved accuracy and sensitivity, they demand excessive computing resources. In this paper, we propose Leaky Integrate-and-Fire Networks (LIF-Nets) for 3D detection and recognition tasks. LIF-Nets have a rich inter-frame sensing capability conferred by membrane potentials and a low-power event-driven mechanism, which together make them excel at 3D processing while reducing computational cost. We also develop ResLIF Blocks to solve the degradation problem of deep LIF-Nets, and employ a U-LIF structure to improve feature representation capability. We carry out experiments on the LUng Nodule Analysis 2016 (LUNA16) public dataset for automated chest CT analysis and find that LIF-Nets achieve 94.6% detection sensitivity at 8 false positives per scan and 94.14% classification accuracy, while the LIF detection net reduces multiplication operations by 65.45%, addition operations by 65.12%, and network parameters by 65.32%. The results show that LIF-Nets deliver extraordinary time-efficient and energy-saving performance while achieving comparable accuracy.
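The inter-frame sensing and event-driven behavior described above both come from the Leaky Integrate-and-Fire neuron itself: the membrane potential leaks, accumulates input over successive frames, and emits a binary spike only when it crosses a threshold. A minimal discrete-time sketch of this update (not the paper's exact formulation; `tau` and `v_th` are hypothetical constants chosen for illustration) might look like:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One discrete-time Leaky Integrate-and-Fire update.

    v    -- membrane potential carried over from the previous frame
    x    -- weighted input at the current frame
    tau  -- leak constant (assumed value, for illustration only)
    v_th -- firing threshold (assumed value, for illustration only)
    """
    v = v / tau + x                       # leak old potential, integrate new input
    spike = (v >= v_th).astype(v.dtype)   # event-driven binary output (0 or 1)
    v = v * (1.0 - spike)                 # hard reset wherever a spike fired
    return spike, v

# The membrane potential v carries information across frames of a 3D volume,
# giving the network its inter-frame sensing capability.
v = np.zeros(3)
for x in [np.array([0.6, 0.2, 1.2]), np.array([0.6, 0.2, 0.0])]:
    spike, v = lif_step(v, x)
```

Because the output is sparse and binary, downstream layers can skip work on silent neurons, which is the source of the multiplication and addition savings reported in the abstract.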

Citation (APA)
Shi, Y., Li, H., Zhang, H., Wu, Z., & Ren, S. (2020). Accurate and Efficient LIF-Nets for 3D Detection and Recognition. IEEE Access, 8, 98562–98571. https://doi.org/10.1109/ACCESS.2020.2995886
