On the hardness of sparsely learning parity with noise


Abstract

Learning Parity with Noise (LPN) is the average-case analogue of the NP-complete problem of decoding random linear codes, and it has been extensively studied in learning theory and cryptography, with applications to quantum-resistant cryptographic schemes. In this paper, we study a sparse variant of LPN whose public matrix consists of sparse vectors (equivalently, each entry of the matrix follows a Bernoulli distribution); the variant considered by Applebaum, Barak, and Wigderson (STOC 2010) is an (extreme) special case of it. We give a win-win argument showing that at least one of the following is true: (1) the hardness of sparse LPN is implied by that of standard LPN under the same noise rate; or (2) there exist new black-box constructions of public-key encryption (PKE) schemes and oblivious transfer (OT) protocols from standard LPN.
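To make the object of study concrete, here is a minimal sketch of how (sparse) LPN samples are generated. The function name and parameters are illustrative, not from the paper: `p` is the Bernoulli parameter governing the sparsity of the public matrix (setting `p = 0.5` recovers the uniform public matrix of standard LPN), and `mu` is the noise rate.

```python
import random

def sparse_lpn_samples(n, m, p, mu, seed=0):
    """Generate m sparse-LPN samples over F_2 (illustrative sketch).

    The secret s is uniform in {0,1}^n; each entry of the public
    matrix A is 1 with probability p (Bernoulli entries, i.e. sparse
    rows for small p); each label is <a_i, s> XOR e_i, where the
    noise bit e_i is 1 with probability mu.
    """
    rng = random.Random(seed)
    s = [rng.randrange(2) for _ in range(n)]
    A, b = [], []
    for _ in range(m):
        a = [1 if rng.random() < p else 0 for _ in range(n)]
        e = 1 if rng.random() < mu else 0
        label = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        A.append(a)
        b.append(label)
    return s, A, b
```

The learning problem is to recover s (or distinguish the samples from uniform) given only A and b; the paper asks how the hardness of this task with sparse A relates to the standard dense case.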

Citation (APA)

Liu, H., Yan, D., Yu, Y., & Zhao, S. (2017). On the hardness of sparsely learning parity with noise. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10592 LNCS, pp. 261–267). Springer Verlag. https://doi.org/10.1007/978-3-319-68637-0_16
