On the Robustness of Learning Parity with Noise

Abstract

The Learning Parity with Noise (LPN) problem is well understood in learning theory and cryptography and has been found quite useful in constructing various lightweight cryptographic primitives. There exists non-trivial evidence that the problem is robust on high-entropy secrets (and even given hard-to-invert leakages), but the known results, due to Dodis, Kalai and Lovett (STOC 2009), were established under non-standard hard learning assumptions. Recent work by Suttichaya and Bhattarakosol (Information Processing Letters, Volume 113, Issues 14–16) claimed that LPN remains provably secure (reducible from the LPN assumption itself) as long as the secret is sampled from any linear min-entropy source, thereby resolving the long-standing open problem. In this paper, we point out that their proof is flawed and that their understanding of LPN is erroneous. We further offer a remedy with a slight adaptation to the setting of Suttichaya and Bhattarakosol.
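
To make the problem concrete, below is a minimal Python sketch (illustrative, not taken from the paper) of an LPN oracle: given a secret s in {0,1}^n, each query returns a pair (a, <a, s> XOR e), where a is uniform over {0,1}^n and e is a Bernoulli noise bit with rate tau. All names and parameters here (lpn_oracle, n, tau, the choice of secret distribution) are assumptions chosen for illustration only.

    import random

    def lpn_oracle(s, tau=0.125):
        """One LPN sample: (a, <a, s> XOR e) with a uniform and e ~ Ber(tau).

        s: the secret, a list of bits; tau: the Bernoulli noise rate.
        Illustrative sketch only; a real instantiation would use a CSPRNG.
        """
        n = len(s)
        a = [random.getrandbits(1) for _ in range(n)]       # uniform a in {0,1}^n
        e = 1 if random.random() < tau else 0               # Bernoulli(tau) noise bit
        b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2  # <a, s> + e mod 2
        return a, b

    # Hypothetical example of the robustness setting the abstract discusses:
    # the secret is drawn from a non-uniform source that still has linear
    # min-entropy (here, half the bits are fixed to zero).
    n = 16
    s = [0] * (n // 2) + [random.getrandbits(1) for _ in range(n // 2)]
    samples = [lpn_oracle(s) for _ in range(5)]
    print(samples)

The standard LPN assumption takes s uniform; the robustness question the paper addresses is whether hardness survives when s instead comes from an arbitrary source of linear min-entropy, as in the sketch above.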

Cite

APA:

Yao, N., Yu, Y., Li, X., & Gu, D. (2016). On the Robustness of learning parity with noise. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9977 LNCS, pp. 99–106). Springer Verlag. https://doi.org/10.1007/978-3-319-50011-9_8
