A note on (im)possibilities of obfuscating programs of zero-knowledge proofs of knowledge

Abstract

Program obfuscation seeks efficient methods to write programs in an incomprehensible way while still preserving their functionality. In this paper we continue this line of research with respect to zero-knowledge proofs of knowledge. Motivated by both theoretical and practical interests, we ask whether the prover and verifier of a zero-knowledge proof of knowledge are obfuscatable. Our answer is as follows. We first present two definitions of obfuscation for interactive probabilistic programs and then establish the following results:

1. With respect to an average-case virtual black-box definition, we establish impossibility results for obfuscating provers of zero-knowledge and witness-indistinguishable proofs of knowledge. These results state that, for any zero-knowledge (or witness-indistinguishable) proof of knowledge with an efficient prover strategy, the honest prover with an instance and its witness hardwired is unobfuscatable if computing a witness (or a second witness) for this instance is hard. Moreover, we extend these results to the t-composition setting and obtain similar impossibilities. They imply that if an adversary obtains the prover's code (e.g., by stealing a smartcard), it can indeed learn some knowledge beyond the prover's functionality, no matter what measures the card designer takes to resist reverse engineering.

2. With respect to a worst-case virtual black-box definition, we show that the honest verifier (with the public input hardwired) of Blum's 3-round zero-knowledge proof for Hamiltonian Cycle is obfuscatable. Our investigation is motivated by privacy protection: if an adversary controls the verifier, it can obtain all provers' names and public inputs, so the provers' privacy may leak. We construct an obfuscator for the verifier, which implies that even if an adversary obtains the verifier's code, it cannot learn any knowledge, e.g., provers' names, from it. We thus realize anonymity of provers' accesses to the verifier and resolve this privacy issue.

© 2011 Springer-Verlag.
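
For context, both definitions mentioned above refine the virtual black-box (VBB) notion of Barak et al. to interactive probabilistic programs. The following is a minimal sketch of the standard worst-case VBB requirement, using generic notation (obfuscator $\mathcal{O}$, program family $\mathcal{P}_k$, adversary $A$, simulator $S$) that is not taken from the paper; the paper's exact formulation for interactive programs may differ in details such as how oracle access to the interactive functionality is modeled.

\[
\forall \text{ PPT } A \;\exists \text{ PPT } S \;\forall P \in \mathcal{P}_k:\quad
\Bigl|\Pr\bigl[A(\mathcal{O}(P)) = 1\bigr] - \Pr\bigl[S^{P}(1^{k}) = 1\bigr]\Bigr| \le \mathrm{negl}(k),
\]

where $\mathcal{O}(P)$ must compute the same functionality as $P$ with at most polynomial slowdown. In the average-case variant, $P$ is drawn from a prescribed distribution over $\mathcal{P}_k$ and the probabilities are also taken over this choice, which is the weaker guarantee under which the paper's impossibility results for provers are stated.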

Cite (APA)

Ding, N., & Gu, D. (2011). A note on (im)possibilities of obfuscating programs of zero-knowledge proofs of knowledge. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7092 LNCS, pp. 292–311). https://doi.org/10.1007/978-3-642-25513-7_20
