Insider Brief:
- The study by Q-CTRL demonstrates that applying QEC primitives without logical encoding can reduce errors in quantum systems with lower overhead than traditional methods.
- Researchers generated a 75-qubit Greenberger-Horne-Zeilinger state, the largest reported to date, using just nine ancilla qubits.
- The new protocol achieved over 85% gate fidelity across 40 lattice sites without additional ancilla qubits and shows reduced discard rates compared to alternative approaches.
- By combining QEC primitives with error suppression, the study presents a hybrid strategy that balances error reduction against resource demands, potentially making it better suited to current-generation quantum processors.
Quantum systems are inherently delicate, with qubits susceptible to errors caused by environmental noise, imperfect control operations, and less-than-ideal hardware conditions. To add insult to injury, these errors can accumulate over time and threaten the reliability of quantum computations. Quantum error correction (QEC) is often presented as a key to unlocking the full potential of quantum computing. Yet its implementation on current quantum processors is hindered by substantial overhead requirements, with estimates reaching up to 1,000 physical qubits per logical qubit. This limitation has generated interest in alternative approaches, such as error mitigation and hardware advances, for achieving practical error reduction. A recent preprint published on arXiv and led by Q-CTRL explores how QEC primitives, key components of error correction protocols, can be strategically applied without logical encoding to capture the benefits of error correction at a fraction of the overhead.
The Roadblocks to Logical Encoding
Logical encoding, central to traditional QEC, is designed with the goal of detecting and correcting errors during computation. However, the overhead required for these implementations is restrictive for near-term devices. According to the study, post-selected error-detection experiments on a few dozen logical qubits can discard over 99.9% of results, reflecting the high resource demand. This overhead adds another major barrier to users with limited access to quantum hardware due to cost or time constraints. Consequently, researchers have turned to less resource-intensive techniques, such as gate-level error suppression, noise-resilient algorithms, and quantum error mitigation, to address the challenge.
Unfortunately, these alternative methods, while effective against certain error types, cannot mitigate all forms of computational noise; the study notes that Markovian errors such as bit flips and dephasing remain out of their reach. This gap highlights the need for solutions that can balance error reduction with practical resource demands.
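As a toy illustration of the kind of Markovian noise discussed here, the sketch below applies a single-qubit bit-flip channel in plain NumPy. The channel, the error probability, and the two-step evolution are illustrative assumptions for this article, not the study's noise model:

```python
import numpy as np

# Pauli-X matrix for a single qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)

def bit_flip(rho, p):
    """Markovian bit-flip channel: with probability p the qubit is
    flipped (Kraus operators sqrt(1-p)*I and sqrt(p)*X)."""
    return (1 - p) * rho + p * X @ rho @ X

# Start in |0><0| and apply two noisy time steps (p = 0.1 each,
# an illustrative choice).
rho = np.array([[1, 0], [0, 0]], dtype=complex)
rho = bit_flip(bit_flip(rho, 0.1), 0.1)

# Stochastic flips accumulate: the excited-state population grows to
# 0.9*0.1 + 0.1*0.9 = 0.18, and no deterministic (unitary) control
# pulse can undo this mixing after the fact.
print(rho[1, 1].real)  # 0.18
```

The point of the sketch is that stochastic errors mix the state rather than rotate it, which is why deterministic suppression alone cannot remove them.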
Experimental Advances in Error Detection
The Q-CTRL team developed a new approach to error reduction that combines sparse stabilizer measurements with deterministic error suppression to produce large Greenberger-Horne-Zeilinger (GHZ) states, maximally entangled quantum states essential for tasks including error correction. Using only nine ancilla qubits, they generated a 75-qubit GHZ state exhibiting genuine multipartite entanglement, the largest of its kind reported to date, as noted in the study.
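For readers unfamiliar with GHZ states, the sketch below builds a small one with the textbook circuit, a Hadamard followed by a chain of CNOTs, in a plain NumPy statevector simulation. This is a small-scale illustration of what a GHZ state is, not the paper's measurement-based, ancilla-assisted protocol:

```python
import numpy as np

def ghz_state(n):
    """Statevector of an n-qubit GHZ state, (|00...0> + |11...1>)/sqrt(2),
    built with the standard circuit: H on qubit 0, then a CNOT chain."""
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def apply_1q(psi, U, q):
        # Expose qubit q as its own tensor axis and contract with U.
        psi = psi.reshape([2] * n)
        psi = np.tensordot(U, psi, axes=([1], [q]))
        return np.moveaxis(psi, 0, q).reshape(-1)

    def apply_cnot(psi, c, t):
        # Flip the target axis on the control = 1 slice.
        psi = psi.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[c] = 1
        psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t if t < c else t - 1)
        return psi.reshape(-1)

    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                      # start in |00...0>
    state = apply_1q(state, H, 0)       # superpose qubit 0
    for q in range(n - 1):              # entangle the rest, one by one
        state = apply_cnot(state, q, q + 1)
    return state

psi = ghz_state(4)
print(abs(psi[0]) ** 2, abs(psi[-1]) ** 2)  # ~0.5 each; all others 0
```

Note that this gate-by-gate construction needs a CNOT depth that grows with qubit count, which is one reason generating a 75-qubit GHZ state with only nine ancillas is a notable engineering result.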
As entanglement is an essential resource for tasks such as error correction and secure communication, this result is highly relevant for applications in quantum computing. Genuine multipartite entanglement, verified through metrics such as multiple-quantum coherence, signifies a level of control over quantum systems previously thought unattainable on near-term devices.
Additionally, their protocol achieved improved fidelity compared to alternative measurement-based techniques, maintaining over 85% gate fidelity across 40 lattice sites without introducing additional ancilla qubits. The reduced discard rate—approximately 78% for the largest GHZ state—is a notable improvement over traditional methods, further validating the efficiency of this approach.
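GHZ fidelity has the convenient property that it can be read off from just three density-matrix elements, and a fidelity above 0.5 is a standard witness of genuine multipartite entanglement for GHZ states. The sketch below checks this on a hypothetical depolarized 4-qubit GHZ state; the noise model and numbers are illustrative, not figures from the study:

```python
import numpy as np

def ghz_fidelity(rho):
    """<GHZ|rho|GHZ>, computed from the two extremal populations and
    the outermost off-diagonal coherence. F > 0.5 witnesses genuine
    multipartite entanglement for GHZ states."""
    d = rho.shape[0]
    populations = rho[0, 0].real + rho[d - 1, d - 1].real
    coherence = rho[0, d - 1].real
    return 0.5 * populations + coherence

# Illustrative: a 4-qubit GHZ state mixed with white noise
# (depolarizing strength p = 0.2 -- hypothetical numbers).
n, p = 4, 0.2
d = 2 ** n
ghz = np.zeros(d)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho = (1 - p) * np.outer(ghz, ghz) + p * np.eye(d) / d

F = ghz_fidelity(rho)
print(F > 0.5)  # True: entanglement survives this noise level
```

Experiments typically estimate the coherence term not from the full density matrix but from parity or multiple-quantum-coherence measurements, which is the kind of metric the study reports.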
A Comparative Perspective on Error-Reduction Strategies
The study situates its findings within the broader landscape of error-reduction strategies, providing insights into how QEC primitives differ from and complement existing methods. Quantum error mitigation, for instance, relies on post-processing to estimate noise-free results but does not correct errors during computation. Hardware-level improvements, such as reducing gate-error rates, have advanced device performance but remain constrained by current technological limits.
In contrast, integrating QEC primitives on unencoded qubits delivers real-time error detection, improving computational capability without full logical encoding. This hybrid strategy builds on the strengths of error suppression while mitigating its limitations, potentially bringing more reliable computing within reach of present hardware as the field works toward the long-term vision of fault-tolerant quantum computing.
Balancing Benefits and Limitations
While the results are promising, they come with limitations. The protocols require tailored circuit designs and remain sensitive to certain error types, such as stochastic bit flips. Additionally, the linear scaling of circuit depth with teleportation path length may limit the applicability of these methods to larger systems.
Still, by demonstrating that computational improvements are achievable with modest overhead, the research provides a starting point for using current-generation quantum processors effectively. QEC primitives, applied strategically, may enhance the performance of superconducting quantum processors without the prohibitive costs associated with full logical encoding.
Contributing authors on the study include Haoran Liao, Gavin S. Hartnett, Ashish Kakkar, Adrian Tan, Michael Hush, Pranav S. Mundada, Michael J. Biercuk, and Yuval Baum.