L-DETR: A Light-Weight Detector for End-to-End Object Detection with Transformers


Abstract

Most high-performance models are currently deployed to the cloud, which not only affects the real-time performance of the model but also restricts its wide use. Designing a lightweight detector that can be deployed offline on non-cloud devices is therefore a promising way to achieve high performance in artificial intelligence applications. Hence, this paper proposes L-DETR, a lightweight detector based on PP-LCNet and an improved transformer. We redesign the structure of PP-LCNet and use it as the backbone of L-DETR for feature extraction. Moreover, we adopt group normalization in the encoder-decoder module and the H-sigmoid activation function in the multi-layer perceptron to improve the accuracy of the transformer in L-DETR. Our proposed model has 26 percent and 46 percent of the parameters of the original DETR with ResNet-50 and ResNet-18 backbones, respectively. Experimental results on multiple datasets show that our proposal converges faster than DETR models and achieves higher performance on object recognition and bounding box detection. The code is available at https://github.com/wangjian123799/L-DETR.git.
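The H-sigmoid activation mentioned in the abstract is commonly defined as ReLU6(x + 3) / 6, a piecewise-linear approximation of the sigmoid that avoids computing an exponential. A minimal sketch of this definition (pure Python, written for illustration; not taken from the paper's code):

```python
def h_sigmoid(x: float) -> float:
    """Hard sigmoid: ReLU6(x + 3) / 6.

    Piecewise-linear approximation of the logistic sigmoid:
      x <= -3  ->  0
      x >=  3  ->  1
      otherwise linear in between.
    """
    return min(max(x + 3.0, 0.0), 6.0) / 6.0


# The function saturates at 0 and 1 and passes through 0.5 at x = 0.
print(h_sigmoid(-4.0))  # 0.0
print(h_sigmoid(0.0))   # 0.5
print(h_sigmoid(3.0))   # 1.0
```

Because it uses only a clamp and a division, this activation is cheap on hardware without fast transcendental units, which is consistent with the paper's goal of off-cloud deployment.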

Citation (APA)

Li, T., Wang, J., & Zhang, T. (2022). L-DETR: A Light-Weight Detector for End-to-End Object Detection with Transformers. IEEE Access, 10, 105685–105692. https://doi.org/10.1109/ACCESS.2022.3208889
