For-all sparse recovery in near-optimal time

Abstract

An approximate sparse recovery system in the ℓ₁ norm consists of parameters k, ε, N, an m-by-N measurement matrix Φ, and a recovery algorithm R. Given a vector x, the system approximates x by x̂ = R(Φx), which must satisfy ∥x̂ − x∥₁ ≤ (1 + ε)∥x − x_k∥₁, where x_k denotes the best k-term approximation of x. We consider the "for all" model, in which a single matrix Φ is used for all signals x. The best existing sublinear algorithm, by Porat and Strauss (SODA'12), uses O(ε^{-3} k log(N/k)) measurements and runs in time O(k^{1-α} N^α) for any constant α > 0. In this paper, we improve the number of measurements to O(ε^{-2} k log(N/k)), matching the best existing upper bound (attained by super-linear algorithms), and the runtime to O(k^{1+β} poly(log N, 1/ε)), under the modest restrictions that k ≤ N^{1-α} and ε ≤ (log k / log N)^γ, for any constants α, β, γ > 0. With no restriction on ε, we obtain an approximate recovery system with m = O((k/ε) log(N/k) ((log N / log k)^γ + 1/ε)) measurements. The algorithmic innovation is a novel encoding procedure that is reminiscent of network coding and that reflects the structure of the hashing stages. © 2014 Springer-Verlag.
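
To make the interface in the abstract concrete, below is a minimal Python sketch of the pipeline x → y = Φx → x̂ = R(y) together with the ℓ₁/ℓ₁ error check ∥x̂ − x∥₁ ≤ (1 + ε)∥x − x_k∥₁. The measurement/recovery pair is a generic CountSketch-style stand-in, not the encoding and recovery procedure of the paper, and as a randomized sketch it illustrates the "for each" setting rather than the "for all" guarantee; the parameter choices (number of repetitions and buckets) are illustrative assumptions, not the bounds stated above.

import numpy as np

rng = np.random.default_rng(0)

# Problem parameters (illustrative, not the paper's settings).
N, k, eps = 1024, 8, 0.5
rows, buckets = 7, 4 * k          # m = rows * buckets measurements

# Random hashes and signs define an m-by-N matrix Phi implicitly.
h = rng.integers(0, buckets, size=(rows, N))   # bucket index of each coordinate
s = rng.choice([-1.0, 1.0], size=(rows, N))    # random sign of each coordinate

def measure(x):
    """y = Phi x, computed without materializing Phi."""
    y = np.zeros((rows, buckets))
    for r in range(rows):
        np.add.at(y[r], h[r], s[r] * x)
    return y

def recover(y):
    """R(y): estimate every coordinate by a median over rows, keep the top k."""
    est = np.median(s * y[np.arange(rows)[:, None], h], axis=0)
    x_hat = np.zeros(N)
    top = np.argsort(np.abs(est))[-k:]
    x_hat[top] = est[top]
    return x_hat

# A nearly k-sparse test signal: k large spikes plus small dense noise.
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(0.0, 10.0, size=k)
x += rng.normal(0.0, 0.01, size=N)

x_hat = recover(measure(x))

# Best k-term approximation x_k and the l1/l1 criterion from the abstract.
x_k = np.zeros(N)
top_true = np.argsort(np.abs(x))[-k:]
x_k[top_true] = x[top_true]
lhs = np.abs(x_hat - x).sum()
rhs = (1.0 + eps) * np.abs(x - x_k).sum()
print(f"||x_hat - x||_1 = {lhs:.3f}  vs  (1+eps)*||x - x_k||_1 = {rhs:.3f}")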

Cite (APA)

Gilbert, A. C., Li, Y., Porat, E., & Strauss, M. J. (2014). For-all sparse recovery in near-optimal time. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8572 LNCS, pp. 538–550). Springer Verlag. https://doi.org/10.1007/978-3-662-43948-7_45
