Wearable System for Personalized and Privacy-preserving Egocentric Visual Context Detection using On-device Deep Learning

Abstract

Wearable egocentric visual context detection raises privacy concerns and is rarely personalized or performed on-device. We created a wearable system, called PAL, that uses on-device deep learning so that user images never have to be sent to the cloud and can instead be processed in a real-time, offline, and privacy-preserving manner. PAL enables human-in-the-loop context labeling through wearable audio input/output and a mobile/web application. PAL runs on-device deep learning models for object and face detection, low-shot custom face recognition (∼1 training image per person), low-shot custom context recognition (e.g., brushing teeth; ∼10 training images per context), and custom context clustering for active learning. We tested PAL with 4 participants for 2 days each, collecting ∼1000 in-the-wild images. The participants found PAL easy to use, and each model achieved >80% accuracy. Thus, PAL supports wearable, personalized, and privacy-preserving egocentric visual context detection using human-in-the-loop, low-shot, and on-device deep learning.
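The abstract does not describe how the low-shot face recognition works internally, but a common way to recognize faces from ∼1 training image per person is nearest-neighbor matching of face embeddings. The sketch below illustrates that idea only; the `LowShotFaceRecognizer` class, the cosine-similarity threshold of 0.6, and the assumption that some on-device network produces the embeddings are illustrative placeholders, not PAL's actual implementation.

```python
import numpy as np

# Minimal sketch of low-shot face recognition via nearest-neighbor
# matching of face embeddings. Assumes an on-device face-embedding
# network (not shown) that maps a face crop to a fixed-size vector;
# PAL's actual models and thresholds are not specified in the abstract.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class LowShotFaceRecognizer:
    """Enroll each person from ~1 embedding; classify by nearest neighbor."""

    def __init__(self, threshold: float = 0.6):
        self.gallery: dict[str, np.ndarray] = {}  # name -> reference embedding
        self.threshold = threshold  # below this similarity, report "unknown"

    def enroll(self, name: str, embedding: np.ndarray) -> None:
        # One reference embedding per person suffices for low-shot use.
        self.gallery[name] = embedding

    def identify(self, embedding: np.ndarray) -> str:
        if not self.gallery:
            return "unknown"
        best_name, best_sim = max(
            ((n, cosine_similarity(embedding, ref)) for n, ref in self.gallery.items()),
            key=lambda pair: pair[1],
        )
        return best_name if best_sim >= self.threshold else "unknown"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recognizer = LowShotFaceRecognizer()
    alice = rng.normal(size=128)          # stand-in for an embedding vector
    recognizer.enroll("alice", alice)
    # A slightly perturbed embedding of the same face should still match.
    print(recognizer.identify(alice + 0.05 * rng.normal(size=128)))  # -> "alice"
    print(recognizer.identify(rng.normal(size=128)))  # unrelated vector -> "unknown"
```

Because the gallery holds one vector per person and matching is a simple dot-product scan, this style of recognizer is cheap enough to run fully on-device, which is consistent with the paper's offline, privacy-preserving design goal.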

Citation (APA)

Khan, M., Fernandes, G., Vaish, A., Manuja, M., & Maes, P. (2021). Wearable System for Personalized and Privacy-preserving Egocentric Visual Context Detection using On-device Deep Learning. In UMAP 2021 - Adjunct Publication of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 35–40). Association for Computing Machinery, Inc. https://doi.org/10.1145/3450614.3461684
