Insider Brief
- Researchers at Trinity College Dublin developed a universal protocol to measure information leakage in idle qubits, addressing a core challenge in quantum computing known as the “protection-operation dilemma.”
- Experiments on IBM’s Falcon processors revealed small but measurable information loss during idling, with rare “bad qubits” exhibiting significant leakage.
- The protocol offers a scalable tool to evaluate hardware performance, paving the way for error mitigation strategies and testing on next-generation quantum systems.
When quantum computers aren’t actively calculating, they’re still at risk of losing information, a challenge that scientists at Trinity College Dublin have quantified in a four-month study on IBM’s Falcon quantum processors. Their findings, published in npj Quantum Information, shed light on a fundamental dilemma in quantum engineering: balancing the need to protect qubits during idle periods against the need for them to interact efficiently during calculations.
Key Findings and Implications
The study introduced a method to measure how much information “leaks” from qubits while they are idle, an important step toward understanding how to design quantum systems that scale up reliably. The Trinity College Dublin team, which included Alexander Nico-Katz, Nathan Keenan and John Goold, ran more than 3,500 experiments on IBM’s 27-qubit Falcon 5.11 processors and detected subtle but statistically significant information loss even when the qubits were not performing any computations.
This work highlights a vexing contradiction in quantum computing. Idle qubits must be isolated to prevent information from spreading to nearby qubits, but during computations, they must interact strongly to perform operations. Known as the “protection-operation dilemma,” this tension underscores the difficulty of maintaining qubit fidelity while scaling quantum computers to practical sizes.
In the paper, the team emphasized the broader relevance of these findings, suggesting that their protocol provides a universal framework for studying these effects and offers insight into how to mitigate them in next-generation quantum hardware.
They write: “The near future of quantum computing promises dramatic scale-ups of system sizes, nascent error-correcting hardware, novel approaches to localizing information, and fledgling fault-tolerance. Our work provides a flexible, powerful, scalable protocol to quantify idle information loss in all these settings.”
Methodology
To study the problem, the team designed a scalable, device-agnostic protocol using concepts from quantum information theory. Central to the method is the “Holevo quantity,” which bounds how much classical information can be extracted from a quantum system. The researchers used it to track how information initially localized on one qubit spread into others during idle periods.
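For an ensemble of states rho_i prepared with probabilities p_i, the Holevo quantity is chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i), where S is the von Neumann entropy; it upper-bounds the classical information any measurement can recover about which state was prepared. As a rough illustration of the quantity itself (a minimal NumPy sketch, not code from the paper):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def holevo_quantity(states, probs):
    """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states)
    )

# Two equiprobable, perfectly distinguishable qubit states carry one full bit:
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
print(holevo_quantity([rho0, rho1], [0.5, 0.5]))   # -> 1.0
```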
In their experiments, they initialized a target qubit in one of two states and measured its interactions with nearby qubits over time. By comparing the amount of information retrievable from the target qubit alone versus from it and its neighbors, the team quantified how much information had leaked.
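As a toy continuation of the sketch above (the partial-swap coupling and its strength are stand-ins for illustration, not the device physics reported in the paper), the leaked information is the gap between the Holevo quantity computed on the target plus its neighbor and the Holevo quantity computed on the target alone:

```python
# Continues the helpers above. Toy idle "leak": a weak partial swap
# between the target qubit and one neighbor (theta is a made-up coupling).
theta = 0.1
c, s = np.cos(theta), np.sin(theta)
U = np.array([[1, 0,     0,     0],          # basis |00>,|01>,|10>,|11>
              [0, c,     -1j*s, 0],
              [0, -1j*s, c,     0],
              [0, 0,     0,     1]])

def idle(target):
    """Prepare target (x) neighbor |0>, apply the leak, return joint state."""
    psi = U @ np.kron(target, np.array([1, 0], dtype=complex))
    return np.outer(psi, psi.conj())

def reduce_to_target(rho2):
    """Partial trace over the neighbor: keep only the target qubit."""
    return np.einsum('injn->ij', rho2.reshape(2, 2, 2, 2))

joint = [idle(np.array([1, 0], dtype=complex)),    # target prepared in |0>
         idle(np.array([0, 1], dtype=complex))]    # target prepared in |1>
local = [reduce_to_target(r) for r in joint]

leak = holevo_quantity(joint, [0.5, 0.5]) - holevo_quantity(local, [0.5, 0.5])
print(leak)   # ~0.04 bits leaked into the neighbor for theta = 0.1
```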
This approach required performing extensive state tomography, a process akin to taking a detailed snapshot of the quantum state of the system. Despite the complexity, the protocol scales efficiently, requiring analysis of only a few qubits at a time, regardless of the overall system size, the team reports.
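The scaling is easy to see if one assumes standard Pauli-basis tomography (the measurement scheme is an assumption here, not a detail from the article): reconstructing n qubits takes 3^n measurement settings, so restricting attention to a target and a handful of neighbors keeps the cost fixed no matter how large the chip grows.

```python
# Cost of Pauli-basis state tomography grows with the qubits analyzed,
# not with the size of the device: 3**n settings for n qubits.
from itertools import product

def tomography_settings(n_qubits):
    return ["".join(paulis) for paulis in product("XYZ", repeat=n_qubits)]

print(len(tomography_settings(2)))   # 9 settings: target + one neighbor
print(len(tomography_settings(3)))   # 27 settings: target + two neighbors
```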
Leakage And Bad Qubits
The experiments revealed two critical findings. First, information leakage, though small, was consistently observed across all tested devices. For the Falcon processors, the leakage amounted to roughly 0.2% of the information stored in a single qubit during the system’s readout time.
The team also found that some qubits exhibited significantly higher leakage rates, with more than 10% of their information spreading into neighboring qubits. These “bad qubits” were rare but highlighted vulnerabilities that manufacturers could address in future designs.
Small But Significant
While the observed leakage is small, it represents a fundamental limit on the fidelity of current quantum processors. Even modest levels of leakage could become significant in larger systems, particularly when computations require precise control over thousands of qubits. The team writes that this emphasizes the need for error mitigation techniques such as dynamical decoupling, many-body localization, or active error correction.
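Of those techniques, dynamical decoupling is the most directly applicable to idle qubits today: pi pulses inserted into the idle window refocus slow noise. A minimal Qiskit sketch of an X-X echo (the circuit and timings are illustrative placeholders, not a procedure from the paper):

```python
from qiskit import QuantumCircuit

idle_ns = 400                          # hypothetical idle window

qc = QuantumCircuit(2)
qc.h(1)                                # a neighboring qubit does real work...
qc.delay(idle_ns // 4, 0, unit="ns")
qc.x(0)                                # first pi pulse
qc.delay(idle_ns // 2, 0, unit="ns")
qc.x(0)                                # second pi pulse undoes the first,
qc.delay(idle_ns // 4, 0, unit="ns")   # refocusing slow phase noise
qc.cx(1, 0)                            # ...before the idle qubit is reused
print(qc.draw())
```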
The protocol itself may have broader applications. As quantum systems grow in size and complexity, it can serve as a benchmark for evaluating the reliability of different hardware designs. It could also help refine theoretical models of how information behaves in many-qubit systems.
Limitations and Challenges
The study suggests a few limitations and areas for future research. The protocol assumes that qubits interact primarily with their nearest neighbors, which may not hold in all architectures. Additionally, the experiments were conducted under controlled conditions designed to minimize environmental noise and gate errors; real-world quantum computations may face additional challenges not captured in these tests.
Another limitation is the reliance on IBM’s Falcon processors, an older generation of the company’s hardware. While the protocol is hardware-agnostic, researchers will likely need to test it on other devices to validate its generality.
The researchers plan to expand their work by applying the protocol to newer quantum processors with more qubits and advanced error-correction capabilities. They also aim to refine their analysis to account for more complex interactions, such as those involving non-local coupling between qubits.
For a deeper technical dive into this work than this summary article can provide, please read the paper in npj Quantum Information.