We prove a version of the derandomized Direct Product Lemma for deterministic space-bounded algorithms. Suppose a Boolean function g : {0,1}^n → {0,1} cannot be computed on more than a 1 − σ fraction of inputs by any deterministic time-T and space-S algorithm, where σ ≤ 1/t for some parameter t. Then, for t-step walks w = (v_1, …, v_t) in some explicit d-regular expander graph on 2^n vertices, the function g^t(w) =def g(v_1)⋯g(v_t) cannot be computed on more than a 1 − Ω(tσ) fraction of inputs by any deterministic algorithm running in time ≈ T/d^t − poly(n) and space ≈ S − O(t). As an application, by iterating this construction, we get a deterministic linear-space "worst-case to constant average-case" hardness amplification reduction, as well as a family of logspace encodable/decodable error-correcting codes that can correct up to a constant fraction of errors. Logspace encodable/decodable codes (with linear-time encoding and decoding) were previously constructed by Spielman [14]. Our codes have weaker parameters (the encoding length is polynomial rather than linear), but they have a conceptually simpler construction. The proof of our Direct Product Lemma is inspired by Dinur's remarkable recent proof of the PCP theorem by gap amplification using expanders [4]. © Springer-Verlag Berlin Heidelberg 2006.
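The derandomized direct product described above can be illustrated with a small sketch. The graph below is a hypothetical stand-in: a 4-regular circulant graph with arbitrarily chosen offsets, not the paper's explicit expander, and parity is used as a placeholder for the hard function g. The point of the sketch is only the combinatorics: a t-step walk is determined by a start vertex plus t−1 edge labels in [d], so the domain of g^t has size 2^n · d^(t−1), rather than the 2^(nt) of the naive direct product.

```python
# Toy illustration of the direct-product construction along expander walks.
# Assumptions (not from the paper): a circulant 4-regular graph stands in for
# the explicit expander, and parity stands in for the hard function g.
from itertools import product

n = 3                  # input length; the vertex set is {0,1}^n, i.e. 2^n vertices
N = 1 << n             # number of vertices
d = 4                  # degree of the graph

# Hypothetical d-regular graph: each vertex v is joined to v +/- 1 and v +/- 2 mod N.
offsets = [1, N - 1, 2, N - 2]

def neighbors(v):
    # The d neighbors of vertex v, indexed by edge labels 0..d-1.
    return [(v + o) % N for o in offsets]

def g(v):
    # Placeholder "hard" Boolean function: parity of the bits of v.
    return bin(v).count("1") % 2

def direct_product(walk):
    # g^t(w) = g(v_1) ... g(v_t): concatenate g's values along the walk.
    return tuple(g(v) for v in walk)

def walks(t):
    # Enumerate all t-step walks (v_1, ..., v_t): a start vertex and t-1 edge
    # labels in [d], so there are N * d^(t-1) walks in total.
    for start in range(N):
        for labels in product(range(d), repeat=t - 1):
            w = [start]
            for lab in labels:
                w.append(neighbors(w[-1])[lab])
            yield tuple(w)

t = 3
all_walks = list(walks(t))
assert len(all_walks) == N * d ** (t - 1)   # 8 * 4^2 = 128 walks, vs 8^3 = 512 arbitrary tuples
```

The savings in domain size (d^(t−1) edge labels per start vertex instead of t independent n-bit inputs) is exactly what makes the direct product "derandomized": the walk can be specified with n + (t−1)·log d bits.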
CITATION STYLE
Guruswami, V., & Kabanets, V. (2006). Hardness amplification via space-efficient direct products. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3887 LNCS, pp. 556–568). https://doi.org/10.1007/11682462_52