Learning Human Activity from Visual Data Using Deep Learning

10 citations · 14 Mendeley readers

This article is free to access.

Abstract

Advances in wearable technologies have the potential to revolutionize and improve people's lives. The gains go beyond the personal sphere, encompassing business and, by extension, the global economy. These technologies are incorporated into electronic devices that collect data from consumers' bodies and their immediate environment. Human activity recognition, which uses various body sensors and modalities either separately or simultaneously, is one of the most important areas of wearable technology development. In real-life scenarios, the number of sensors deployed is dictated by practical and financial considerations. In the research for this article, we revisited our earlier efforts and accordingly reduced the number of required sensors, limiting ourselves to first-person vision data for activity recognition. Nonetheless, our results beat the state of the art by more than 4% in F1 score.
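The abstract reports improvement in terms of F1 score, the standard metric for comparing activity-recognition models. As a reminder of what that metric measures, here is a minimal sketch of per-class F1 (harmonic mean of precision and recall); the activity labels and predictions below are illustrative, not taken from the paper.

```python
def f1_score(y_true, y_pred, positive):
    """Per-class F1: harmonic mean of precision and recall for one label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy activity-recognition example (hypothetical labels)
y_true = ["walk", "walk", "run", "walk", "run"]
y_pred = ["walk", "run", "run", "walk", "walk"]
print(round(f1_score(y_true, y_pred, "walk"), 3))  # precision = recall = 2/3
```

In multi-class activity recognition, per-class F1 values are typically averaged (macro or weighted) to obtain a single comparable figure, as provided by `sklearn.metrics.f1_score`.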

Citation (APA)

Alhersh, T., Stuckenschmidt, H., Ur Rehman, A., & Belhaouari, S. B. (2021). Learning Human Activity from Visual Data Using Deep Learning. IEEE Access, 9, 106245–106253. https://doi.org/10.1109/ACCESS.2021.3099567
