Human action recognition is one of the most pressing problems in societal emergencies of any kind. Technology helps to address such problems, but often at the cost of eroding human privacy. Several approaches have considered the relevance of privacy in the pervasive process of observing people, and new algorithms have been proposed that operate on low-resolution images to hide people's identities. However, many of these methods overlook the fact that public security demands real-time solutions: active cameras require flexible distributed systems in sensitive areas such as airports, hospitals, stations, squares and roads. To reconcile human privacy with real-time supervision, we propose a novel deep architecture, the Multi Streams Network. This model runs in real time and performs action recognition on extremely low-resolution videos, exploiting three sources of information: RGB images, optical flow and slack mask data. Experiments on two datasets show that our architecture improves recognition accuracy compared to the two-stream approach and ensures real-time execution on an Edge TPU (Tensor Processing Unit).
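The abstract describes an architecture with three input streams (RGB, optical flow, mask data) whose features are combined for classification. The sketch below is not the authors' released code; it is a minimal illustration, in Python with TensorFlow/Keras, of a generic three-stream network with late fusion. The input resolution (12x16), branch width, fusion scheme and number of actions are illustrative assumptions, not values taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers, Model

def stream(name, channels):
    # Small convolutional branch for one low-resolution input modality.
    inp = layers.Input(shape=(12, 16, channels), name=name)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

num_actions = 10  # placeholder: depends on the dataset

rgb_in, rgb_feat = stream("rgb", 3)      # RGB frames
flow_in, flow_feat = stream("flow", 2)   # optical flow (x/y components)
mask_in, mask_feat = stream("mask", 1)   # mask stream

fused = layers.Concatenate()([rgb_feat, flow_feat, mask_feat])  # late fusion
out = layers.Dense(num_actions, activation="softmax")(fused)

model = Model([rgb_in, flow_in, mask_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Deployment on an Edge TPU, as mentioned in the abstract, would typically require converting such a Keras model to a quantized TensorFlow Lite model; that step is omitted here.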
Citation
Russo, P., Ticca, S., Alati, E., & Pirri, F. (2021). Learning to See through a Few Pixels: Multi Streams Network for Extreme Low-Resolution Action Recognition. IEEE Access, 9, 12019–12026. https://doi.org/10.1109/ACCESS.2021.3050514