P_VGGnet: A convolutional neural network (CNN) with pixel-based attention map


Attention maps have previously been fused into the VggNet structure (EAC-Net) [1] and have shown significant improvement over the plain VggNet structure. However, in [1], E-Net was designed around facial action unit (AU) centers and was intended for facial AU detection only. To make attention maps applicable to any image type, this paper proposes a new convolutional neural network (CNN) structure, P_VggNet, comprising two parts: P_Net and VggNet with 16 layers (VggNet-16). We design the generation approach for P_Net and propose the P_VggNet structure. To demonstrate the efficiency of P_VggNet, we conducted two experiments, which indicated that P_VggNet extracts image features more efficiently than VggNet-16.
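The abstract does not spell out how the pixel-based attention map is fused with the VggNet-16 feature maps. A common fusion approach, sketched below as a minimal NumPy illustration (the function name and shapes are assumptions, not the paper's actual implementation), is element-wise re-weighting: each spatial position of every feature channel is scaled by the corresponding attention weight.

```python
import numpy as np

def pixel_attention_fuse(features, attention):
    """Fuse a per-pixel attention map into CNN feature maps.

    features:  (C, H, W) array of feature maps from a conv layer
    attention: (H, W) array of per-pixel weights, typically in [0, 1]
    Returns re-weighted feature maps with the same shape as `features`.
    """
    assert features.shape[1:] == attention.shape
    # Broadcast the (H, W) map across all C channels.
    return features * attention[np.newaxis, :, :]

# Toy example: 2 channels over a 3x3 spatial grid.
feats = np.ones((2, 3, 3))
attn = np.zeros((3, 3))
attn[1, 1] = 1.0  # attend only to the centre pixel
fused = pixel_attention_fuse(feats, attn)
```

In this sketch, only the centre pixel of each channel survives the fusion; in practice the attention map would come from a learned branch (here, P_Net) rather than being hand-set.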




Liu, K., Zhong, P., Zheng, Y., Yang, K., & Liu, M. (2018). P_VGGnet: A convolutional neural network (CNN) with pixel-based attention map. PLoS ONE, 13(12). https://doi.org/10.1371/journal.pone.0208497
