To date, a great deal of attention has focused on characterizing the performance of quantum error-correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds, meaning that other performance indicators, such as the overhead, become important. The overhead quantifies the total number of physical qubits required to perform error correction. In this work we analyze the overhead of the toric code under a perfect-matching decoding algorithm and find two distinct operating regimes. The first regime admits a universal scaling analysis owing to a mapping to a statistical-physics model. The second regime characterizes the behavior in the limit of small physical error rate and can be understood by counting the error configurations that lead to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and report an overhead in the range of $10^1$–$10^4$ physical qubits for a selection of realistic operating parameters.
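To illustrate how a below-threshold overhead estimate of this kind can be obtained, the sketch below uses the common heuristic that the logical error rate of a distance-$d$ code decays as $p_L \approx A\,(p/p_{\mathrm{th}})^{\lceil d/2 \rceil}$, and that a distance-$d$ toric code uses $2d^2$ physical data qubits. The prefactor $A$, the threshold $p_{\mathrm{th}}$, and the target rates are illustrative assumptions, not the values or the method used in the paper.

```python
from math import ceil

def min_distance(p, p_target, p_th=0.1, A=0.1, d_max=101):
    """Smallest odd code distance d achieving p_L <= p_target under the
    heuristic p_L ~ A * (p / p_th)**ceil(d / 2).
    A and p_th are illustrative placeholders, not fitted values."""
    for d in range(3, d_max + 1, 2):
        p_logical = A * (p / p_th) ** ceil(d / 2)
        if p_logical <= p_target:
            return d
    return None  # target unreachable within d_max

def toric_overhead(d):
    """Physical data qubits of a distance-d toric code: 2 * d**2."""
    return 2 * d * d

# Example: physical error rate 1e-3, target logical error rate 1e-12.
d = min_distance(p=1e-3, p_target=1e-12)
print(d, toric_overhead(d))  # d = 11, overhead = 242 qubits
```

With these assumed parameters the estimate lands at a few hundred qubits, inside the $10^1$–$10^4$ range quoted above; the actual range in the paper comes from the two scaling regimes it derives, not from this heuristic.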