Bell nonlocality using tensor networks and sparse recovery

Abstract

Bell's theorem, which states that quantum predictions are incompatible with a local hidden variable description, is a cornerstone of quantum theory and at the center of many quantum information processing protocols. Over the years, different perspectives on nonlocality have been put forward, along with different ways to detect and quantify it. Unfortunately, and in spite of its relevance, deciding whether a given observed correlation is nonlocal becomes computationally intractable as the complexity of the Bell scenario increases. Here, we propose to analyze a Bell scenario as a tensor network, a perspective that permits us to test and quantify nonlocality using very efficient algorithms originating in compressed sensing, which offer a significant speedup over standard linear programming methods. We exploit the fact that all nonsignaling correlations can be described by hidden variable models governed by a quasiprobability, a fact we prove with simple linear algebra methods.
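To make the abstract's two ingredients concrete, the sketch below (not taken from the paper; it assumes Python with numpy and scipy, and the function names is_local and l1_quasiprobability are hypothetical) illustrates, for the simplest CHSH scenario, the standard linear-programming membership test for the local polytope and the quasiprobability picture mentioned above: every nonsignaling correlation can be written as a quasiprobability mixture of deterministic strategies, and the minimum l1 norm of such a mixture equals 1 exactly when the correlation is local, so the excess over 1 gives a simple nonlocality quantifier.

import numpy as np
from scipy.optimize import linprog

def deterministic_behaviors():
    """Columns: the 16 local deterministic CHSH behaviors p(ab|xy), flattened."""
    cols = []
    for a0 in (0, 1):
        for a1 in (0, 1):
            for b0 in (0, 1):
                for b1 in (0, 1):
                    p = np.zeros((2, 2, 2, 2))  # indices a, b, x, y
                    for x, ax in enumerate((a0, a1)):
                        for y, by in enumerate((b0, b1)):
                            p[ax, by, x, y] = 1.0
                    cols.append(p.ravel())
    return np.array(cols).T  # shape (16, 16)

def is_local(p):
    """Standard LP test: is p a convex mixture of deterministic behaviors?"""
    D = deterministic_behaviors()
    n = D.shape[1]
    A_eq = np.vstack([D, np.ones((1, n))])          # D q = p and sum(q) = 1
    b_eq = np.concatenate([p.ravel(), [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

def l1_quasiprobability(p):
    """Minimum l1 norm of a quasiprobability q with D q = p.
    Every nonsignaling p admits such a q; the minimum is 1 iff p is local."""
    D = deterministic_behaviors()
    n = D.shape[1]
    # Split q = q_plus - q_minus with both parts nonnegative; minimize their sum.
    A_eq = np.hstack([D, -D])
    res = linprog(np.ones(2 * n), A_eq=A_eq, b_eq=p.ravel(),
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.fun

def quantum_chsh_behavior():
    """Correlations from CHSH-optimal measurements on a singlet state."""
    p = np.zeros((2, 2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            E = (1.0 if not (x and y) else -1.0) / np.sqrt(2)  # correlator E(x,y)
            for a in (0, 1):
                for b in (0, 1):
                    p[a, b, x, y] = (1 + ((-1) ** (a + b)) * E) / 4
    return p

if __name__ == "__main__":
    p = quantum_chsh_behavior()
    print("local?", is_local(p))                   # expected: False
    print("min l1 norm:", l1_quasiprobability(p))  # expected: greater than 1

Both tasks reduce to linear programs whose size grows exponentially with the number of parties, settings, and outcomes, which is the scaling bottleneck the paper addresses with tensor-network and sparse-recovery techniques; this toy version only shows the underlying formulation.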

Citation (APA)

Eliëns, I. S., Brito, S. G. A., & Chaves, R. (2020). Bell nonlocality using tensor networks and sparse recovery. Physical Review Research, 2(2). https://doi.org/10.1103/PhysRevResearch.2.023198
