Insider Brief:
- A paper outlining a new theory for error correction has shown that researchers can dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors.
- Leading the project is Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, who says the team sees the project as laying out a kind of architecture that could be applied in many different ways, and that there is already a lot of interest in finding adaptations for this work.
- Thompson’s group is now working on demonstrating the conversion of errors to erasures in a small working quantum computer that combines several tens of qubits.
UNIVERSITY RESEARCH NEWS — Princeton, New Jersey / Sept. 1, 2022 — In conventional computers, fixing errors is a well-developed field. Every cellphone requires checks and fixes to send and receive data over messy airwaves. Quantum computers offer enormous potential to solve certain complex problems that are impossible for conventional computers, but this power depends on harnessing extremely fleeting behaviors of subatomic particles. These computing behaviors are so ephemeral that even looking in on them to check for errors can cause the whole system to collapse.
In a paper outlining a new theory for error correction, published Aug. 9 in Nature Communications, an interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, and collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed that they could dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors. The new technique increases the acceptable error rate four-fold, from 1% to 4%, which is practical for quantum computers currently in development.
“The fundamental challenge to quantum computers is that the operations you want to do are noisy,” said Thompson, meaning that calculations are prone to myriad modes of failure.
In a conventional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, or as messy as one wireless router interfering with another. A common approach for handling such faults is to build in some redundancy, so that each piece of data is compared with duplicate copies. However, that approach increases the amount of data needed and creates more possibilities for errors. Therefore, it only works when the vast majority of information is already correct. Otherwise, checking wrong data against wrong data leads deeper into a pit of error.
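The "pit of error" effect is easy to see with a classical repetition code. The following is a minimal Python sketch, illustrative only and not the paper's quantum scheme: a single bit is copied five times, each copy flips independently with probability p, and a majority vote tries to recover the original. When p is small, voting suppresses errors; well above the threshold, the vote is wrong more often than a single copy would be.

```python
import random

def majority_vote(bits):
    """Recover one logical bit from noisy copies by majority vote."""
    return 1 if sum(bits) * 2 > len(bits) else 0

def logical_error_rate(p, copies=5, trials=100_000):
    """Estimate how often majority voting over `copies` duplicates of a
    true bit (here, 1) fails when each copy flips with probability p."""
    failures = 0
    for _ in range(trials):
        received = [0 if random.random() < p else 1 for _ in range(copies)]
        failures += majority_vote(received) != 1
    return failures / trials

# Below some threshold, redundancy suppresses errors; far above it,
# checking wrong data against wrong data makes things worse.
for p in (0.01, 0.30, 0.60):
    print(f"physical error {p:.0%} -> logical error {logical_error_rate(p):.2%}")
```

Run as written, the logical error rate drops to roughly zero at 1% physical error, but climbs above the physical rate itself once flips dominate, which is exactly the threshold behavior Thompson describes next.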
“If your baseline error rate is too high, redundancy is a bad strategy,” Thompson said. “Getting below that threshold is the main challenge.”
Rather than focusing solely on reducing the number of errors, Thompson’s team essentially made errors more visible. The team delved deeply into the actual physical causes of error and engineered their system so that the most common source of error effectively eliminates the damaged data rather than simply corrupting it. Thompson said this behavior represents a particular kind of error known as an “erasure error,” which is fundamentally easier to weed out than data that is corrupted but still looks like all the other data.
In a conventional computer, if a packet of supposedly redundant information comes across as 11001, it might be risky to assume that the slightly more prevalent 1s are correct and the 0s are wrong. But if the information comes across as 11XX1, where the corrupted bits are evident, the case is more compelling.
“These erasure errors are vastly easier to correct because you know where they are,” Thompson said. “They can be excluded from the majority vote. That is a huge advantage.”
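The article's classical example can be written out directly. This is a toy sketch of the analogy only, not the paper's quantum decoder: excluding positions marked as erasures turns a shaky 3-to-2 vote into a unanimous one.

```python
def decode(copies):
    """Majority-vote redundant bits, excluding erasures marked 'X'.

    Corrupted-but-plausible bits ('0'/'1') all get a vote; positions
    flagged as erasures are excluded, so they cannot outvote good data.
    """
    votes = [b for b in copies if b != "X"]
    ones = sum(b == "1" for b in votes)
    return "1" if ones * 2 > len(votes) else "0"

print(decode("11001"))  # '1', but only by a risky 3-to-2 margin
print(decode("11XX1"))  # '1' unanimously among the surviving bits
```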
Erasure errors are well understood in conventional computing, but researchers had not previously considered trying to engineer quantum computers to convert errors into erasures, Thompson said.
As a practical matter, their proposed system could withstand an error rate of 4.1%, which Thompson said is well within the realm of possibility for current quantum computers. In previous systems, state-of-the-art error correction could handle an error rate of less than 1%, which Thompson said is at the edge of the capability of any current quantum system with a large number of qubits.
The team’s ability to generate erasure errors turned out to be an unexpected benefit of a choice Thompson made years ago. His research explores “neutral atom qubits,” in which quantum information (a “qubit”) is stored in a single atom. His group pioneered the use of the element ytterbium for this purpose. Thompson said the group chose ytterbium partly because it has two electrons in its outermost shell, compared with most other neutral atom qubits, which have just one.
“I think of it as a Swiss army knife, and this ytterbium is the bigger, fatter Swiss army knife,” Thompson said. “That extra little bit of complexity you get from having two electrons gives you a lot of unique tools.”
One of those extra tools turned out to be useful for eliminating errors. The team proposed pumping the electrons in ytterbium from their stable “ground state” to excited states called “metastable states,” which can be long-lived under the right conditions but are inherently fragile. Counterintuitively, the researchers propose to use these states to encode the quantum information.
“It’s like the electrons are on a tightrope,” Thompson said. And the system is engineered so that the same factors that cause error also cause the electrons to fall off the tightrope.
As a bonus, once they fall to the ground state, the electrons scatter light in a very visible way, so shining a light on a collection of ytterbium qubits causes only the faulty ones to light up. Those that light up should be written off as errors.
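A toy Monte Carlo sketch can make this detection logic concrete. The probabilities below are illustrative stand-ins, not measured values, though the detectable fraction echoes the paper's estimate (see the figure caption below) that only about 5% of the dominant decays return undetected to the qubit subspace.

```python
import random

# Hypothetical numbers for illustration only.
ERROR_PROB = 0.04           # chance a qubit errs during a gate
DETECTABLE_FRACTION = 0.95  # fraction of errors that leave the qubit subspace

def inspect_qubits(n):
    """Classify each simulated qubit after a gate: 'ok', 'erasure'
    (the atom fell off the metastable 'tightrope' and fluoresces when
    lit), or 'hidden' (an error still disguised as valid qubit data)."""
    outcomes = []
    for _ in range(n):
        if random.random() >= ERROR_PROB:
            outcomes.append("ok")
        elif random.random() < DETECTABLE_FRACTION:
            outcomes.append("erasure")
        else:
            outcomes.append("hidden")
    return outcomes

results = inspect_qubits(10_000)
print({k: results.count(k) for k in ("ok", "erasure", "hidden")})
```

The point of the sketch is the bookkeeping: almost all faults end up in the flagged "erasure" bucket, where the decoder knows their locations, leaving only a small "hidden" remainder to be caught by ordinary redundancy.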
This advance required combining insights in both quantum computing hardware and the theory of quantum error correction, leveraging the interdisciplinary nature of the research team and their close collaboration. While the mechanics of this setup are specific to Thompson’s ytterbium atoms, he said the idea of engineering quantum qubits to generate erasure errors could be a useful goal in other systems — of which there are many in development around the world — and is something that the group is continuing to work on.
“We see this project as laying out a kind of architecture that could be applied in many different ways,” Thompson said, adding that other groups have already begun engineering their systems to convert errors into erasures. “We are already seeing a lot of interest in finding adaptations for this work.”
As a next step, Thompson’s group is now working on demonstrating the conversion of errors to erasures in a small working quantum computer that combines several tens of qubits.
*Image: Overview of a fault-tolerant neutral atom quantum computer using erasure conversion: (a) Schematic of a neutral atom quantum computer, with a plane of atoms under a microscope objective used to image fluorescence and project trapping and control fields. (b) The physical qubits are individual ¹⁷¹Yb atoms. The qubit states are encoded in the metastable 6s6p ³P₀, F = 1/2 level (subspace Q), and two-qubit gates are performed via the Rydberg state |r⟩, which is accessed through a single-photon transition (λ = 302 nm) with Rabi frequency Ω. The dominant errors during gates are decays from |r⟩ with a total rate Γ = ΓB + ΓR + ΓQ. Only a small fraction, ΓQ/Γ ≈ 0.05, return to the qubit subspace, while the remaining decays are either blackbody (BBR) transitions to nearby Rydberg states (ΓB/Γ ≈ 0.61) or radiative decay to the ground state 6s² ¹S₀ (ΓR/Γ ≈ 0.34). At the end of a gate, these events can be detected and converted into erasure errors by detecting fluorescence from ground-state atoms (subspace R), or by ionizing any remaining Rydberg population via autoionization and collecting fluorescence on the Yb⁺ transition (subspace B). (c) A patch of the XZZX surface code studied in this work, showing data qubits (open circles), ancilla qubits (filled circles) and stabilizer operations, performed in the order indicated by the arrows. (d) Quantum circuit representing a measurement of a stabilizer on data qubits D1–D4 using ancilla A1 with interleaved erasure conversion steps. Erasure detection is applied after each gate, and erased atoms are replaced from a reservoir as needed using a movable optical tweezer. It is strictly only necessary to replace the atom that was detected to have left the subspace, but replacing both protects against the possibility of undetected leakage on the second atom.
The paper, “Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays,” was published Aug. 9 in Nature Communications. The work was supported by the National Science Foundation QLCI Center for Robust Quantum Simulation, as well as grants from the Army Research Office, the Office of Naval Research, the Defense Advanced Research Projects Agency and the Sloan Foundation.
SOURCE: Steven Schultz, Princeton University