Insider Brief
- IBM Quantum researchers demonstrated that certain types of noise, specifically nonunital noise, can extend the depth and usefulness of quantum computations beyond previously assumed limits.
- The study introduces RESET protocols that recycle noisy ancilla qubits into cleaner states, allowing measurement-free error correction and potentially enabling longer computations on noisy devices.
- While the approach suggests current quantum processors may be more powerful than expected, it faces challenges including extremely tight error thresholds and significant ancilla qubit overhead.
 
For years, the consensus in quantum computing has been that noise is the enemy. Without error correction and mid-circuit measurements, noisy quantum devices were thought to be limited to shallow circuits that collapse after only logarithmic depth. But a new study from IBM Quantum researchers suggests this somewhat pessimistic view is incomplete. Under the right conditions, certain kinds of noise could actually help quantum computers sustain meaningful computations.
The work, published in PRX Quantum by Oles Shtanko and Kunal Sharma, both IBM scientists, shows that nonunital noise — a type of noise that has a directional bias, like amplitude damping that pushes qubits toward their ground state — can be harnessed to extend quantum computation much further than previously thought.
According to the researchers, noise is unavoidable in quantum hardware, mainly because every gate, every idle step and every interaction with the environment introduces errors, or the potential for errors. Over the years, theorists worked with a simplifying assumption: unital noise models, such as depolarizing noise, where errors randomly scramble qubit states without preference.

For those not familiar with the term unital, a good analogy might be how cream is stirred into coffee — everything gets mixed evenly, and no spot is favored. Nonunital noise is more like gravity acting on spilled marbles: instead of scattering randomly, they all tend to roll down toward the floor.
Under the unital noise model, circuits quickly lose coherence. After just logarithmic depth, the computation essentially becomes random and can be simulated efficiently on classical computers. This led to the prevailing belief that useful quantum computation requires error correction and measurements inside circuits, a technology still years from widespread deployment.
A Different Kind of Noise
Shtanko and Sharma challenged this narrative by asking: what if the noise isn’t unital? Unlike depolarizing channels, nonunital noise has a bias. Amplitude damping, for instance, nudges qubits toward their ground state rather than scattering them randomly. Such noise is common in real hardware, but until now it had been underexplored as a computational resource.
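To make the contrast concrete, here is a minimal sketch (not from the paper, and with arbitrary noise strengths) that repeatedly applies a textbook depolarizing channel and a textbook amplitude-damping channel to the same qubit: the unital channel washes the state out toward the maximally mixed state, while the nonunital one settles it into the ground state.

```python
import numpy as np

# Minimal sketch (not from the paper): apply a textbook unital channel
# (depolarizing) and a textbook nonunital channel (amplitude damping)
# repeatedly to the same single-qubit state and compare where each settles.
# Noise strengths below are arbitrary illustrative values.

def depolarize(rho, p):
    """Unital: with probability p, replace the state with the maximally mixed state I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

def amplitude_damp(rho, gamma):
    """Nonunital: decay |1> toward |0> with strength gamma (standard Kraus form)."""
    k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

# Start both experiments from the superposition state |+> = (|0> + |1>)/sqrt(2).
rho_unital = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
rho_nonunital = rho_unital.copy()

for _ in range(100):
    rho_unital = depolarize(rho_unital, 0.1)             # drifts to I/2: all structure lost
    rho_nonunital = amplitude_damp(rho_nonunital, 0.1)   # drifts to |0><0|: a clean, reusable state

print("unital fixed point:\n", np.round(rho_unital.real, 2))      # ~[[0.5, 0], [0, 0.5]]
print("nonunital fixed point:\n", np.round(rho_nonunital.real, 2))  # ~[[1, 0], [0, 0]]
```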
The researchers built on a concept called the “quantum refrigerator,” introduced in earlier work. The idea is that dissipative processes — traditionally viewed as destructive — can instead be used to “cool” qubits, resetting them into cleaner states and suppressing entropy.
Using this insight, the IBM team designed circuits that use nonunital noise and ancillary qubits to perform “RESET” operations. These resets act as substitutes for measurements in traditional error correction, allowing systems to shed accumulated errors without direct readout.
Extending Computation Without Measurements
The key finding: local quantum circuits under nonunital noise can correct errors and extend computation to arbitrary depth, with only polylogarithmic overhead in both qubit count and circuit depth. Polylogarithmic means the cost increases very slowly, even when the problem size gets much larger. In practical terms, this means noisy devices could still run long computations, even without error correction in the conventional sense — if they are engineered to exploit the structure of their native noise.
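For a rough sense of what polylogarithmic overhead means in practice (this is an illustration, not the paper's exact bound), the snippet below assumes the overhead factor grows like log²(n) and shows how slowly it increases as the computation gets a thousand times larger at each step.

```python
import math

# Rough illustration (not the paper's exact bounds): assume the polylog
# overhead factor behaves like log2(n)**2 and watch how slowly it grows
# while the target computation size n jumps by factors of a thousand.
for n in (10**3, 10**6, 10**9, 10**12):
    overhead = math.log2(n) ** 2   # assumed polylog(n); the paper's exponents may differ
    print(f"computation size {n:>16,}  ->  overhead factor ~ {overhead:8,.0f}")
```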
As long as the noise is sufficiently weak and nonunital, the circuits remain computationally universal and almost as difficult for classical computers to simulate as ideal, noiseless ones.
The RESET protocol proposed in the study works in three stages. First, passive cooling: ancilla qubits are randomized, then exposed to noise that pushes them toward a predictable, partially polarized state. Second, algorithmic compression: a special circuit called a compound quantum compressor concentrates this polarization into a smaller set of qubits, effectively purifying them. Third, swapping: these cleaner qubits replace “dirty” ones in the main computation, refreshing the system.
Together, these steps allow the quantum device to recycle noisy ancillas into useful resources — something impossible under unital noise models.
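The sketch below walks through those three stages on a deliberately simplified toy model that tracks only diagonal (classical) qubit populations. It is not the authors' construction: the compound quantum compressor is replaced by the basic three-qubit compression step familiar from algorithmic cooling, and the damping rate and cooling duration are assumed values chosen for illustration.

```python
import numpy as np

# Toy walk-through of the three RESET stages on diagonal (classical) qubit
# states. The paper's compound quantum compressor is replaced here by the
# basic three-qubit compression step from algorithmic cooling; the damping
# rate, step count and qubit labels are illustrative assumptions.

gamma, steps = 0.05, 40   # assumed amplitude-damping strength and cooling duration

# Stage 1 -- passive cooling: each randomized ancilla (P(|1>) = 1/2) relaxes
# toward |0>, since amplitude damping shrinks P(|1>) by (1 - gamma) per step.
p1 = 0.5 * (1 - gamma) ** steps   # residual excited-state population
p0 = 1 - p1
bias = p0 - p1                    # partial polarization of each ancilla
print(f"ancilla polarization after passive cooling: {bias:.4f}")

# Stage 2 -- algorithmic compression: take three equally polarized ancillas
# and apply the basis permutation that swaps |011> <-> |100>; this pushes the
# first ancilla's polarization up to (3*bias - bias**3) / 2.
joint = np.zeros(8)
for s in range(8):
    bits = [(s >> k) & 1 for k in (2, 1, 0)]              # bit order: q1 q2 q3
    joint[s] = np.prod([p1 if b else p0 for b in bits])   # independent ancillas
joint[[3, 4]] = joint[[4, 3]]                             # the compression permutation

p0_target = joint[:4].sum()                               # P(q1 = |0>) afterwards
print(f"target polarization after compression: {2 * p0_target - 1:.4f}")
print(f"predicted (3e - e^3)/2:                {(3 * bias - bias**3) / 2:.4f}")

# Stage 3 -- swap: the now-colder target ancilla would be swapped into the
# data register in place of a noisy qubit, shedding entropy without a measurement.
```

The full protocol in the paper concentrates polarization across larger blocks of ancillas and repeats the cycle throughout the computation; the toy version only shows why recycling noisy ancillas is possible at all once the noise has a preferred direction.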
Implications for Quantum Advantage?
There are a few important implications of the work, according to the researchers.
The paper suggests that current noisy quantum processors may be more powerful than theory once assumed. If their dominant errors are nonunital, then simulations of their behavior on classical computers are significantly harder, extending the frontier of quantum advantage. It also opens a pathway to measurement-free fault tolerance. Measurements are one of the hardest operations to implement reliably in many quantum platforms. A protocol that avoids them but still corrects errors could simplify the hardware roadmap.
Finally, the study reframes the role of noise. Instead of seeing it purely as an obstacle, engineers may be able to design circuits that integrate natural dissipation into their computational fabric, turning a bug into a feature.
Caveats and Challenges
The researchers are careful to note limitations and places where there will need to be further work.
The thresholds for noise strength are extremely tight — on the order of one error in 100,000 operations in some estimates. The overhead in ancilla qubits can also be massive, with theoretical requirements reaching millions in certain scenarios. While these are theoretical upper bounds that may be reduced in practice, they highlight the gulf between proof of principle and practical implementation.
While nonunital noise helps, it must be weak and well-characterized. Too much dissipation still destroys the computation, and not all nonunital channels will behave in a helpful way.
With those limitations in mind, the study still counters a common framing in quantum computing as an all-or-nothing race: until full error correction arrives, devices are stuck in a noisy purgatory. The new results complicate that picture, showing that the nature of the noise matters as much as its strength, and that certain types of dissipation can keep computations alive longer than expected. For the field, it’s both a warning and an opportunity.
A warning because benchmarking quantum devices with simplistic depolarizing models may underestimate their computational power.
An opportunity because embracing nonunital noise as a design element could enable new classes of algorithms and architectures in the near term.
The study also touches on a deeper theme: the complexity of simulating natural quantum systems. Many physical processes — such as atoms interacting with a thermal bath — naturally involve nonunital noise.
If simulating these systems is classically intractable, it underscores the unique role of quantum computers in studying real-world, open-system physics. As Shtanko and Sharma conclude, the old claim that noisy quantum devices are only useful for shallow circuits “may be misleading.” With nonunital noise in the picture, even noisy processors could perform tasks that stretch beyond classical limits.