Explainable Arguments

Abstract

We introduce an intriguing new type of argument system with the additional property of being explainable. Intuitively, by explainable we mean that, given any argument for a statement and any witness, we can produce the random coins for which the Prove algorithm outputs exactly the same bits of that argument. This work lays the foundations for explainable arguments in both the interactive and the non-interactive setting. We show how to build explainable arguments from witness encryption and from indistinguishability obfuscation. Finally, we present applications of explainable arguments; notably, we construct deniable chosen-ciphertext-secure encryption, whereas previous deniable encryption schemes achieved only chosen-plaintext security.
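To make the explainability syntax concrete, the following is a minimal, runnable toy sketch. It is not the paper's construction (which relies on witness encryption or indistinguishability obfuscation) and has no soundness; the names prove, explain, and the helper _mask are illustrative assumptions, used only to show what "producing the random coins for which Prove outputs the same bits of the argument" means.

# Toy illustration of the explainability property described in the abstract.
# NOT the authors' scheme: a degenerate, sound-less example whose only purpose
# is to show the (Prove, Explain) interface and the reproducibility check.

import hashlib
import secrets

COIN_LEN = 32  # bytes of prover randomness (illustrative parameter)


def _mask(statement: bytes, witness: bytes) -> bytes:
    # Hypothetical helper: derive a fixed-length mask from statement and witness.
    return hashlib.sha256(b"mask|" + statement + b"|" + witness).digest()


def prove(statement: bytes, witness: bytes, coins: bytes) -> bytes:
    # "Prove" is deterministic once its random coins are fixed.
    mask = _mask(statement, witness)
    return bytes(c ^ m for c, m in zip(coins, mask))


def explain(statement: bytes, witness: bytes, argument: bytes) -> bytes:
    # Explainability: return coins r such that prove(statement, witness, r)
    # reproduces the given argument bit for bit.
    mask = _mask(statement, witness)
    return bytes(a ^ m for a, m in zip(argument, mask))


if __name__ == "__main__":
    x, w = b"statement", b"witness"
    r = secrets.token_bytes(COIN_LEN)
    pi = prove(x, w, r)
    r_explained = explain(x, w, pi)
    assert prove(x, w, r_explained) == pi  # explained coins reproduce the argument

In the actual paper this property must hold together with soundness and (for the deniability application) against adversarially chosen arguments, which is what makes the constructions from witness encryption and obfuscation nontrivial.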

Cite

APA

Hanzlik, L., & Kluczniak, K. (2022). Explainable Arguments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13411 LNCS, pp. 59–79). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-18283-9_4
