Protecting the Visual Fidelity of Machine Learning Datasets Using QR Codes


Abstract

Machine learning is becoming increasingly popular in a wide variety of modern technologies. However, research has demonstrated that machine learning models are vulnerable to adversarial examples in their inputs. Potential attacks include poisoning datasets by perturbing input samples to mislead a machine learning model into producing undesirable results. Such perturbations are often subtle and imperceptible from a human's perspective. This paper investigates two methods of verifying the visual fidelity of image-based datasets by detecting perturbations made to the data using QR codes. In the first method, a verification string is stored for each image in a dataset. These verification strings can be used to determine whether an image in the dataset has been perturbed. In the second method, only a single verification string is stored, which is used to verify whether an entire dataset is intact.
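The abstract does not detail how the authors construct verification strings from QR codes. As a rough illustration of the two verification granularities only, the sketch below substitutes plain SHA-256 digests for the paper's QR-code scheme: one digest per image (method 1) versus a single digest over the whole dataset (method 2). All function names here are hypothetical, not from the paper.

```python
import hashlib

def image_fingerprint(pixels: bytes) -> str:
    # Hash the raw pixel bytes; any perturbation changes the digest.
    return hashlib.sha256(pixels).hexdigest()

def per_image_strings(dataset):
    # Method 1 sketch: one verification string per image,
    # so a perturbed image can be localized individually.
    return [image_fingerprint(img) for img in dataset]

def dataset_string(dataset):
    # Method 2 sketch: a single string for the whole dataset,
    # obtained by hashing the concatenation of per-image digests.
    h = hashlib.sha256()
    for img in dataset:
        h.update(image_fingerprint(img).encode())
    return h.hexdigest()

# Example: a one-byte perturbation in the first image.
clean = [b"\x00\x01\x02", b"\x10\x11\x12"]
tampered = [b"\x00\x01\x03", b"\x10\x11\x12"]

ref, checked = per_image_strings(clean), per_image_strings(tampered)
print([a == b for a, b in zip(ref, checked)])  # [False, True]
print(dataset_string(clean) == dataset_string(tampered))  # False
```

Method 1 costs one stored string per image but pinpoints which samples were perturbed; method 2 stores a single string but can only report that the dataset as a whole is no longer intact.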

Citation (APA)

Chow, Y. W., Susilo, W., Wang, J., Buckland, R., Baek, J., Kim, J., & Li, N. (2019). Protecting the Visual Fidelity of Machine Learning Datasets Using QR Codes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11806 LNCS, pp. 320–335). Springer Verlag. https://doi.org/10.1007/978-3-030-30619-9_23
