Insider Brief
- Quantum error correction is the essential engineering challenge standing between today’s noisy quantum computers and tomorrow’s fault-tolerant machines capable of solving commercially valuable problems beyond the reach of classical supercomputers.
- Unlike classical error correction, which simply copies bits, quantum error correction must protect fragile quantum states without measuring them directly – a constraint imposed by quantum mechanics that requires encoding one logical qubit across multiple physical qubits.
- Recent demonstrations from Google, IBM, Microsoft, and others show progress toward the critical threshold where adding more physical qubits reduces rather than increases errors, but scaling to the millions of physical qubits needed for useful applications remains years away.
- The field encompasses multiple competing approaches – from surface codes requiring thousands of physical qubits per logical qubit to newer LDPC codes from startups like Iceberg Quantum that could dramatically reduce overhead – with billions in investment riding on which approach proves most practical. Riverlane’s 2025 QEC Report, produced in partnership with Resonance, found that 95% of quantum professionals now view error correction as essential to scaling quantum computing.
The headlines about quantum computing tend to focus on impressive numbers: “Google achieves quantum supremacy.” “IBM unveils 1,000-qubit processor.” “Microsoft demonstrates topological qubits.” These announcements signal genuine progress in building quantum hardware, and they deserve the attention they receive.
But there’s a challenge lurking beneath these milestones that rarely makes headlines, even though it represents the single most important obstacle between today’s experimental quantum computers and tomorrow’s commercially transformative machines.
That challenge is error correction.

Current quantum computers – even those with impressive qubit counts – are fundamentally unreliable. Their qubits lose information within milliseconds, their operations introduce errors at rates thousands of times higher than classical computers, and their results must be repeated many times just to extract a usable answer. These machines exist in what researchers call the Noisy Intermediate-Scale Quantum (NISQ) era, a phase where quantum computers can demonstrate interesting physics but struggle to outperform classical systems on practical problems.
The path forward requires making those qubits work together to create stable, error-resistant logical qubits that can maintain quantum information long enough to run the complex algorithms that justify quantum computing’s promise. That’s what quantum error correction does – and why mastering it is the defining challenge of the decade for quantum computing companies.
For investors, understanding quantum error correction is critical. Companies that demonstrate real progress toward fault tolerance are the ones most likely to transition from research projects to revenue-generating products. For policymakers and business leaders, it signals which applications are genuinely near-term versus decades away. And for technologists, it represents one of the most elegant intersections of mathematics, physics, and engineering in modern computing.
This is quantum computing’s scalability problem – and solving it is what separates laboratory demonstrations from commercial reality.
What Is Quantum Error Correction?
Quantum error correction is a set of techniques for protecting quantum information from errors caused by decoherence, noise, and imperfect quantum operations. The goal is to encode logical qubits across multiple physical qubits in such a way that errors can be detected and corrected without destroying the quantum state.
Quantum mechanics, however, imposes constraints that classical error correction never has to face. In classical computing, error correction is simple: you copy a bit three times and take a majority vote. If one bit flips from 0 to 1 due to noise, the other two copies remain correct, and the original value can be recovered.
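For concreteness, here is a minimal sketch of that classical scheme (triple modular redundancy with a majority vote). The function names and error model are illustrative, not drawn from any particular library.

```python
import random

def encode(bit: int) -> list[int]:
    """Copy the bit three times."""
    return [bit, bit, bit]

def add_noise(copies: list[int], flip_prob: float) -> list[int]:
    """Independently flip each copy with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in copies]

def decode(copies: list[int]) -> int:
    """Majority vote: correct as long as at most one copy flipped."""
    return int(sum(copies) >= 2)

random.seed(0)
trials = 100_000
flip_prob = 0.01  # each copy flips with 1% probability
failures = sum(decode(add_noise(encode(1), flip_prob)) != 1 for _ in range(trials))

# Uncorrected error rate: 1%. With the vote, failure needs two flips:
# roughly 3 * 0.01**2 = 0.03%.
print(f"logical error rate with majority vote: {failures / trials:.5f}")
```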
Quantum error correction can’t use this approach, for two fundamental reasons. The no-cloning theorem states that you cannot create identical copies of an unknown quantum state, and measuring a quantum state to check for errors collapses the superposition, destroying the very information you’re trying to protect.
These constraints seem to make error correction impossible. How do you detect errors without measuring the quantum state? How do you correct errors without knowing what the state was?
The breakthrough came in the mid-1990s when researchers including Peter Shor, Andrew Steane, and others demonstrated that quantum mechanics, despite appearing to forbid error correction, actually permits it through clever encoding schemes. The key insight is that you can measure whether an error occurred without measuring what the quantum state is. By encoding one logical qubit across multiple physical qubits and measuring only the correlations between them – not the individual qubit values – you can detect and correct errors while preserving the quantum information.
This sounds abstract, so let’s break it down.
The Core Idea: Redundancy Without Copying
Imagine you want to protect a single qubit in superposition: a state that is simultaneously 0 and 1, each with a specific amplitude. You can’t copy this state, but you can spread its information across multiple qubits in such a way that the collective state of the group encodes the original qubit’s information.
In the simplest quantum error correction code – Shor’s 9-qubit code – one logical qubit is encoded across nine physical qubits. The encoding creates correlations between these qubits such that if one physical qubit experiences an error (a bit flip, phase flip, or other disturbance), the error shows up as a change in the correlations, not in the logical qubit itself.
By measuring these correlations – called syndrome measurements – you can determine which physical qubit suffered an error and what type of error occurred, all without learning anything about the logical qubit’s actual quantum state. Once you know where the error is, you can apply a correction operation to undo it, restoring the original quantum information.
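The same idea can be seen in the three-qubit bit-flip code, the simpler building block inside Shor’s nine-qubit code (it protects against bit flips only, not phase flips). The sketch below, an illustrative NumPy statevector simulation rather than production QEC software, shows that the two parity checks Z0Z1 and Z1Z2 pinpoint which qubit flipped without ever revealing the encoded amplitudes.

```python
import numpy as np

def encode(alpha: complex, beta: complex) -> np.ndarray:
    """Logical alpha|0> + beta|1>  ->  alpha|000> + beta|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000] = alpha
    state[0b111] = beta
    return state

def bit_flip(state: np.ndarray, qubit: int) -> np.ndarray:
    """Apply an X (bit-flip) error to one qubit; qubit 0 is the leftmost bit."""
    flipped = np.zeros_like(state)
    for basis in range(8):
        flipped[basis ^ (1 << (2 - qubit))] = state[basis]
    return flipped

def parity(state: np.ndarray, q1: int, q2: int) -> int:
    """Expectation of Z_q1 Z_q2: exactly +1 or -1 for code and single-error states."""
    signs = np.array([(-1) ** (((b >> (2 - q1)) & 1) ^ ((b >> (2 - q2)) & 1))
                      for b in range(8)])
    return int(round(float(np.sum(signs * np.abs(state) ** 2))))

# Syndrome (Z0Z1, Z1Z2) -> which qubit to flip back (None = no error).
LOOKUP = {(+1, +1): None, (-1, +1): 0, (-1, -1): 1, (+1, -1): 2}

alpha, beta = 0.6, 0.8j                         # unknown to the decoder
state = bit_flip(encode(alpha, beta), qubit=1)  # inject an error on qubit 1

syndrome = (parity(state, 0, 1), parity(state, 1, 2))
bad_qubit = LOOKUP[syndrome]
if bad_qubit is not None:
    state = bit_flip(state, bad_qubit)          # undo the detected error

print("syndrome:", syndrome)                                       # (-1, -1)
print("state restored:", np.allclose(state, encode(alpha, beta)))  # True
```

Notice that the syndrome identifies which qubit flipped without ever depending on the values of alpha and beta, which is exactly the property that lets correction proceed without collapsing the superposition.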
The process is continuous. Errors happen constantly in quantum systems, so error correction runs in real time, repeatedly measuring syndromes, detecting errors, and applying corrections while the quantum computation proceeds. It’s less like spell-checking a finished document and more like maintaining balance on a bicycle – constant, active stabilization to prevent the system from falling into an unusable state.
Why Do Quantum Computers Need Error Correction?
The short answer is that quantum computers are extraordinarily fragile, and without error correction, they cannot scale to solve the problems we want them to solve.
To understand why, it helps to compare quantum and classical systems. A classical bit in a modern processor experiences an error rate of roughly one in a billion billion operations (10^-18). These errors are so rare that for most applications, you don’t need error correction at all, and when you do – in systems like memory chips or network transmission – simple techniques like parity checks suffice.
Quantum computers, by contrast, operate at error rates many orders of magnitude higher. A typical gate error rate in current systems ranges from 0.1% to 1%, meaning one in every thousand to one in every hundred operations introduces an error. Over the course of a complex algorithm requiring millions of operations, errors accumulate catastrophically, rendering the result useless.
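A quick back-of-the-envelope calculation shows why this matters. If each gate fails independently with probability p, the chance of completing a circuit of n gates without a single error is (1 - p)^n; the numbers below are illustrative, not measurements from any particular machine.

```python
import math

# Probability of an error-free run as circuits get longer.
# Computed in log space because the raw probabilities underflow ordinary floats.
for p in (1e-2, 1e-3):                       # 1% and 0.1% per-gate error rates
    for n_gates in (1_000, 100_000, 1_000_000):
        log10_p_clean = n_gates * math.log10(1 - p)
        print(f"p={p:.1%}, {n_gates:>9,} gates -> "
              f"error-free probability ~10^{log10_p_clean:.1f}")

# At p = 0.1% and a million gates the error-free probability is around 10^-434:
# without correction, essentially every run is corrupted.
```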
The Decoherence Problem
The root cause is decoherence: the process by which quantum systems lose their quantum properties due to interaction with the environment. Qubits are sensitive to temperature fluctuations, electromagnetic fields, vibrations, and even cosmic rays. Any uncontrolled interaction collapses superpositions, destroys entanglement, and introduces errors.
Different qubit technologies suffer from decoherence at different rates, but none are immune:
| Qubit Type | Typical Coherence Time | Primary Error Sources |
| --- | --- | --- |
| Superconducting Qubits | 100 microseconds to 1 millisecond | Thermal noise, flux noise, dielectric loss |
| Trapped Ions | Seconds to minutes | Laser intensity fluctuations, heating, magnetic field drift |
| Photonic Qubits | Effectively unlimited (photons don’t decohere easily) | Photon loss, detector inefficiency, mode mismatch |
| Neutral Atoms | Seconds | Laser noise, atomic collisions, spontaneous emission |
| Topological Qubits | Theoretically very long (if realized) | Not yet demonstrated at scale |
Even the best systems lose quantum information within seconds to minutes, which sounds long until you consider that a useful quantum algorithm might require hours or days of continuous operation. Without active error correction, quantum computers cannot maintain coherent calculations long enough to solve real-world problems.
Why Classical Error Correction Doesn’t Work
Classical error correction techniques don’t translate to quantum systems. As mentioned earlier, you cannot copy quantum states, so redundancy-based schemes like triple modular redundancy are off the table. Measuring qubits to check for errors collapses superpositions, so you can’t directly inspect the state without destroying it.
Moreover, quantum errors are continuous rather than discrete. A classical bit flips from 0 to 1 or vice versa – a binary event. A quantum state can experience an infinite range of small rotations, phase shifts, or amplitude errors. Protecting against this continuous error space requires fundamentally different approaches than digital error correction.
Finally, classical computers benefit from error suppression: you can shield circuits, cool chips, and design robust logic gates that minimize errors at the hardware level. Quantum systems also use these techniques, but because qubits are so sensitive, hardware-level improvements alone cannot reduce error rates to the levels needed for fault-tolerant computation. Active error correction is unavoidable.
How Does Quantum Error Correction Actually Work?
The mechanics of quantum error correction vary depending on the specific code being used, but the general workflow follows a consistent pattern. Let’s walk through the process using a conceptual example.
Step 1: Encoding the Logical Qubit
You start with one logical qubit – the quantum information you want to protect. This might be a qubit in superposition representing part of a quantum algorithm’s calculation. The encoding process spreads this information across multiple physical qubits.
In a simple code, you might encode one logical qubit into three physical qubits. More sophisticated codes use many more: the surface code, one of the most studied approaches, encodes one logical qubit across dozens or hundreds of physical qubits, depending on the desired error rate.
The encoding creates entanglement between the physical qubits such that they collectively represent the logical qubit. No single physical qubit contains the full information – it’s distributed across the group, making the system resilient to errors on individual qubits.
Step 2: Performing Quantum Operations
Once encoded, you perform quantum gates on the logical qubit. However, you don’t operate on the logical qubit directly. Instead, you perform operations on the underlying physical qubits in such a way that the logical qubit evolves correctly.
This requires fault-tolerant gate implementations: carefully designed sequences of physical operations that, even if some physical gates fail, still produce the correct logical operation. Fault-tolerant gates are more complex and time-consuming than direct physical gates, but they protect the quantum information during computation.
Step 3: Measuring Error Syndromes
Errors inevitably occur – thermal noise causes a phase error, an imperfect gate introduces a rotation. These errors show up as changes in the correlations between physical qubits.
To detect errors, you perform syndrome measurements: special measurements that reveal information about correlations without revealing the logical qubit’s state. Think of it like checking whether two people’s stories match without learning what either person said. If the correlation has changed, you know an error occurred, but you don’t collapse the quantum superposition.
Syndrome measurements are performed repeatedly throughout the computation – often after every few gates – to catch errors as soon as they appear.
Step 4: Decoding and Correcting Errors
The syndrome measurements produce a pattern of results – a string of bits indicating which correlations changed. A classical computer, running alongside the quantum processor, analyzes this pattern to infer which physical qubits likely suffered errors and what type of errors occurred.
This decoding step is computationally intensive and time-sensitive. The classical computer must process syndrome data faster than new errors accumulate, or the system falls behind and error correction fails. Riverlane, a U.K.-based company focused entirely on quantum error correction, published a peer-reviewed paper in Nature Communications demonstrating its Local Clustering Decoder (LCD), a hardware-based decoder implemented on FPGA hardware that performs decoding rounds in under one microsecond. This kind of real-time, low-latency decoding is now recognized as a critical bottleneck that must be solved with specialized hardware rather than software alone.
Once the likely errors are identified, the classical computer sends instructions back to the quantum processor to apply correction operations: quantum gates that undo the detected errors. If the syndrome data is accurate and the corrections are applied quickly enough, the logical qubit returns to its correct state, and the computation continues.
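The simplest possible decoder is a precomputed lookup table mapping each syndrome to a minimum-weight correction. The sketch below builds one by brute force for a distance-5 repetition code protecting against bit flips; it is an illustrative toy, not Riverlane’s decoder or any production algorithm, and real-time decoders for surface codes rely on scalable methods such as minimum-weight perfect matching or union-find.

```python
from itertools import combinations

N_QUBITS = 5               # data qubits in the repetition code
N_CHECKS = N_QUBITS - 1    # parity checks between neighbouring qubits

def syndrome(error: tuple[int, ...]) -> tuple[int, ...]:
    """Parity of each neighbouring pair under a given bit-flip pattern."""
    return tuple(error[i] ^ error[i + 1] for i in range(N_CHECKS))

# Precompute: lowest-weight error consistent with each possible syndrome.
decoder_table: dict[tuple[int, ...], tuple[int, ...]] = {}
for weight in range(N_QUBITS + 1):
    for flipped in combinations(range(N_QUBITS), weight):
        error = tuple(int(i in flipped) for i in range(N_QUBITS))
        decoder_table.setdefault(syndrome(error), error)

# Decode a measured syndrome: here qubits 1 and 3 actually flipped.
true_error = (0, 1, 0, 1, 0)
correction = decoder_table[syndrome(true_error)]
print("inferred correction:", correction)    # (0, 1, 0, 1, 0)
```

The table lookup itself is instantaneous; the hard part at scale is that the number of syndromes grows exponentially with code size, which is why practical decoders compute corrections on the fly and why dedicated hardware matters.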
Step 5: Repeating the Cycle
Error correction is not a one-time event. It runs continuously throughout the quantum computation, monitoring for errors, decoding syndromes, and applying corrections in real time. The process operates in parallel with the quantum algorithm, maintaining the integrity of the logical qubits while computation proceeds.
This is quantum error correction’s central insight: you don’t prevent errors from happening. You accept that errors will occur and build a system that detects and corrects them faster than they accumulate. It’s less like building a waterproof boat and more like constantly bailing water to stay afloat.
| Stage | Classical Analogy | Quantum Implementation |
| --- | --- | --- |
| Encoding | Writing data to RAID array | Entangling physical qubits into logical qubit state |
| Computation | Performing operations on data | Fault-tolerant gates on physical qubits |
| Error Detection | Parity check or checksum | Syndrome measurements on qubit correlations |
| Error Correction | Restoring from backup | Classical decoding + correction gates |
| Verification | Re-checking data integrity | Continuous syndrome monitoring |
The table illustrates that while quantum error correction shares conceptual similarities with classical approaches, the implementation is fundamentally different due to quantum mechanics’ constraints.
What Are the Main Types of Quantum Error Correction Codes?
Researchers have developed dozens of quantum error correction codes, each with different trade-offs between the number of physical qubits required, the types of errors they protect against, and the complexity of implementing them on real hardware. Here are the most important categories:
Surface Codes
Surface codes are currently the most widely studied and implemented approach. They encode logical qubits by arranging physical qubits on a two-dimensional lattice, with syndrome measurements performed on small groups of neighboring qubits.
The surface code’s main advantage is locality: syndrome measurements only involve nearby qubits, making it compatible with hardware where long-range interactions are difficult. The code also has a high error threshold – meaning it can tolerate physical error rates up to roughly 1% while still reducing logical error rates.
The drawback is overhead. A surface code requires hundreds to thousands of physical qubits per logical qubit, depending on the desired logical error rate. For algorithms requiring thousands of logical qubits, the total physical qubit count quickly reaches millions.
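A rough sense of that overhead comes from a widely used rule-of-thumb scaling for the surface code: the logical error rate falls as roughly 0.1 * (p / p_th)^((d + 1) / 2) for code distance d and threshold p_th of about 1%, and a distance-d (rotated) patch uses about 2d^2 - 1 physical qubits. The constants in the sketch below are illustrative assumptions for a back-of-the-envelope estimate, not measured values for any specific machine.

```python
def code_distance_needed(p_phys: float, p_target: float,
                         p_th: float = 1e-2) -> int:
    """Smallest odd distance d whose estimated logical error rate is <= p_target."""
    assert p_phys < p_th, "estimate only meaningful below threshold"
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(d: int) -> int:
    """Rotated surface code patch: d*d data qubits plus d*d - 1 ancillas."""
    return 2 * d * d - 1

for p_phys in (1e-3, 1e-4):
    d = code_distance_needed(p_phys, p_target=1e-12)
    print(f"p_phys={p_phys:.0e}: distance {d}, "
          f"~{physical_qubits(d)} physical qubits per logical qubit")
```

Under these assumptions, a physical error rate of 0.1% needs distance 21 and roughly 900 physical qubits per logical qubit, while improving hardware to 0.01% cuts that to around 250, which is why code choice and hardware quality trade off so directly.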
Despite this overhead, surface codes are the leading candidate for near-term fault-tolerant quantum computers. Google, IBM, and other major players are building systems designed around surface code error correction.
Stabilizer Codes
Stabilizer codes are a broad family that includes surface codes as a special case. The category encompasses codes like the Steane code, Bacon-Shor code, and color codes, each with different geometric structures and error properties.
These codes use stabilizer measurements – analogous to the syndrome measurements described earlier – to detect errors without collapsing quantum states. The mathematical framework of stabilizer codes provides powerful tools for analyzing and designing error correction schemes, making them a workhorse of quantum error correction research.
Topological Codes
Topological codes, including surface codes and the closely related toric code, exploit the global properties of qubit arrangements to protect quantum information. Errors in topological codes correspond to defects in the qubit lattice, and correction operations move these defects until they annihilate each other or are pushed to the boundary of the system.
The advantage of topological codes is robustness: local errors don’t immediately corrupt the logical qubit. Multiple errors must occur in specific patterns to cause logical failures, making these codes resilient to noise. However, they also require large numbers of physical qubits and complex syndrome extraction circuits.
Low-Density Parity Check (LDPC) Codes
LDPC codes represent one of the most exciting recent developments in quantum error correction, inspired by classical error correction techniques. Unlike surface codes, which arrange qubits on a planar lattice, LDPC codes use more complex graph structures that allow each physical qubit to participate in multiple syndrome checks with non-neighboring qubits.
The key advantage is dramatically reduced overhead. Recent theoretical and experimental work suggests that quantum LDPC codes could achieve fault tolerance with significantly fewer physical qubits per logical qubit than surface codes – potentially reducing the resource requirements by an order of magnitude or more.
Iceberg Quantum, a Sydney-based startup founded by researchers from the University of Sydney, is at the forefront of this approach. In February 2026, Iceberg unveiled its Pinnacle fault-tolerant architecture, claiming it can reduce the physical qubits needed to break RSA-2048 encryption from over a million to under 100,000 – a tenfold reduction under standard hardware assumptions. The company raised $6 million in a seed round led by LocalGlobe with Blackbird and DCVC, and is already working with hardware partners including PsiQuantum (photonics), Diraq (spin qubits), Oxford Ionics, and IonQ (trapped ions).
Following IBM’s transition to qLDPC codes in 2024, Riverlane predicts other industry players will follow suit in 2026, yielding diverse fault-tolerant quantum computing architectures tailored to specific hardware platforms. This shift from surface codes toward LDPC codes may be one of the most consequential trends in quantum computing over the next several years.
The challenge is implementation. LDPC codes require long-range qubit interactions, which are difficult on most quantum hardware platforms. Trapped-ion hardware is particularly well-suited due to its long-range connectivity, excellent coherence times, and higher gate fidelities, which is why Oxford Ionics partnered with Iceberg Quantum specifically to integrate qLDPC codes into its systems.
Concatenated Codes
Concatenated codes protect quantum information by nesting one error correction code inside another. For example, you might encode a logical qubit using Shor’s 9-qubit code, then encode each of those nine physical qubits using another layer of the same code, creating 81 physical qubits protecting one doubly-encoded logical qubit.
Each layer of encoding suppresses errors exponentially, so concatenated codes can achieve very low logical error rates with modest physical resources – in theory. In practice, concatenated codes require extremely high-fidelity gates and syndrome measurements, making them difficult to implement on near-term hardware. Most researchers favor surface codes or LDPC codes instead.
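That suppression, and its fragility, can be seen in a toy recursion: if one level of encoding maps a physical error rate p to roughly c * p^2, then each additional level squares the error again, but only if p starts below the pseudo-threshold 1/c. The constant c = 100 below is an illustrative assumption, not a property of any specific code.

```python
def concatenated_error(p: float, levels: int, c: float = 100.0) -> float:
    """Logical error rate after `levels` rounds of concatenation, assuming p -> c*p**2."""
    for _ in range(levels):
        p = c * p * p
    return p

for p0 in (1e-3, 1e-2):
    rates = [concatenated_error(p0, k) for k in range(4)]
    print(f"p={p0}: " + ", ".join(f"{r:.1e}" for r in rates))

# With p = 1e-3 (below the 1/c = 1% pseudo-threshold) each level squares the error:
# 1e-3 -> 1e-4 -> 1e-6 -> 1e-10. At p = 1e-2 the recursion never improves,
# showing why the hardware must first beat the threshold.
```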
| Code Type | Physical Qubits per Logical Qubit | Error Threshold | Primary Advantage | Primary Challenge |
| --- | --- | --- | --- | --- |
| Surface Code | 100s to 1000s | ~1% | Local interactions, well-studied | High overhead |
| Stabilizer Codes | Varies by code | Varies | Mathematical framework | Implementation complexity |
| Topological Codes | 100s to 1000s | ~1% | Robust to local errors | High overhead, complex decoding |
| LDPC Codes | Potentially 10s to 100s | ~1% (theoretical) | Lower overhead | Requires long-range connectivity |
| Concatenated Codes | Exponential growth per layer | Low (demands very high gate fidelity) | Exponential error suppression | Extremely demanding hardware requirements |
The diversity of codes reflects the fact that the optimal approach depends on the underlying hardware platform, the types of errors it experiences, and the specific application being targeted. There is no universal “best” quantum error correction code – only trade-offs.
What Is the Difference Between Physical and Logical Qubits?
Understanding the distinction between physical and logical qubits is essential for interpreting quantum computing announcements and assessing companies’ progress toward fault tolerance.
Physical qubits are the actual quantum systems that hardware engineers build and control: individual superconducting circuits, trapped ions, neutral atoms, or other quantum objects. These are the qubits that appear in headlines when companies announce “1,000-qubit processors.” Physical qubits are noisy, error-prone, and short-lived.
Logical qubits are the error-corrected qubits that emerge from combining many physical qubits through quantum error correction. A logical qubit is not a physical object but a mathematical abstraction: the quantum information protected by a collection of physical qubits. Logical qubits are stable, reliable, and capable of sustaining long computations without accumulating errors.
The relationship between physical and logical qubits is the resource overhead of quantum error correction. Current estimates suggest that achieving logical error rates low enough for commercially useful algorithms will require somewhere between 100 and 10,000 physical qubits per logical qubit, depending on the error correction code used and the physical error rates of the underlying hardware. Iceberg Quantum’s work on LDPC codes aims to push that ratio significantly lower, which is why their Pinnacle architecture attracted attention for claiming RSA-2048 could be broken with under 100,000 total physical qubits.
This means that a quantum computer with 1,000 physical qubits might only provide around 1 to 10 error-corrected logical qubits – and possibly none at all if error rates are high or the code is inefficient. For algorithms requiring thousands of logical qubits – like breaking RSA encryption or simulating complex molecules – physical qubit counts must reach millions under current surface code approaches.
Why Logical Qubits Matter More Than Physical Qubits
When evaluating quantum computing progress, logical qubits are the more meaningful metric. A machine with 10 high-quality logical qubits can perform longer, more complex calculations than a machine with 1,000 noisy physical qubits.
This is why recent demonstrations from Google, IBM, and others focus not just on qubit counts but on achieving error correction milestones: showing that logical error rates decrease as more physical qubits are added, demonstrating fault-tolerant operations on logical qubits, and scaling to multiple logical qubits working together.
Riverlane has proposed the concept of “QuOps” (error-free Quantum Operations) as a transparent, measurable standard for understanding what any quantum system can reliably achieve, moving beyond ambiguous terms like “quantum advantage.” This kind of standardized metric helps investors and business leaders compare different quantum computing approaches on a level playing field.
These milestones signal progress toward the threshold where quantum computers transition from scientific curiosities to practical tools. Companies that achieve this transition first will capture the early commercial opportunities, making logical qubit development – not just physical qubit scaling – the key competitive metric.
Who Is Leading Quantum Error Correction Development?
Quantum error correction research spans academia, national laboratories, and private companies, with different players leading in different areas. The Quantum Error Correction Report 2025, published by Riverlane in partnership with Resonance, found that research into QEC codes has exploded, with 120 new peer-reviewed papers published in the first 10 months of 2025, up from 36 in 2024. Here’s the landscape:
Technology Giants
Google Quantum AI made headlines in December 2024 with the announcement of its Willow chip, demonstrating that adding more physical qubits reduced logical error rates – a critical milestone called “below-threshold error correction.” The company continues to push surface code implementations and is building systems targeting millions of physical qubits.
IBM has committed to a roadmap targeting fault-tolerant quantum computing by 2033. The company’s approach emphasizes modular architectures that link multiple quantum processors together, combining surface codes with hardware improvements to scale logical qubit counts while managing error rates. IBM’s 2024 transition to qLDPC codes signaled a broader industry shift away from surface-code-only strategies.
Microsoft pursues a different strategy based on topological qubits, which theoretically offer inherent error protection. While the company has not yet demonstrated large-scale topological qubits, it invests heavily in error correction research and collaborates with academic groups on topological code development.
Amazon Web Services (AWS), through its Quantum Solutions Lab and partnerships with hardware providers, supports error correction research across multiple platforms and provides cloud-based access to error correction simulations.
Atom Computing and QuEra, both focused on neutral atom quantum computers, are exploring error correction schemes tailored to their hardware’s strengths, including long coherence times and flexible connectivity.
Quantum Error Correction Specialists
Riverlane, based in Cambridge, U.K., is the leading company focused entirely on quantum error correction. The company builds the “error correction stack” for quantum computing, including its Deltaflow decoding platform and Deltakit open-source toolkit. Riverlane’s hardware decoder published in Nature Communications demonstrated real-time decoding in under one microsecond on FPGA hardware. In December 2025, the company opened a QEC research hub in Delft with Professor Barbara Terhal, and its Deltaflow 3 platform, expected in late 2026, will introduce “streaming logic” for continuous error correction during computation. Riverlane’s 2025 QEC Survey of over 300 quantum professionals found that 95% view QEC as essential, with 2028 emerging as an informal industry deadline for integration.
Iceberg Quantum, based in Sydney, Australia, designs fault-tolerant architectures based on LDPC codes. Founded by PhD researchers from the University of Sydney under Professor Stephen Bartlett, the company launched in March 2025 with a PsiQuantum partnership and has since partnered with Oxford Ionics, Diraq, and IonQ. Its February 2026 Pinnacle architecture claims to reduce the physical qubits needed for RSA-2048 by tenfold, backed by a $6 million seed round.
Quantum Computing Hardware Companies
IonQ, a leader in trapped-ion quantum computing, has demonstrated error detection and correction on its hardware and is working toward fault-tolerant systems. The company emphasizes the high fidelity of ion trap operations as an advantage for reaching error correction thresholds.
Rigetti Computing develops superconducting quantum processors and has published research on surface code implementations compatible with its modular chiplet architecture.
PsiQuantum, which is building a photonic quantum computer, claims that photonics offers inherent advantages for error correction due to photons’ long coherence times and low error rates. The company’s partnership with Iceberg Quantum to develop next-generation fault-tolerant architectures for its platform underscores the importance of optimized QEC codes for hardware-specific systems.
Quantinuum, the company formed from Honeywell’s quantum division and Cambridge Quantum, uses trapped-ion technology and has demonstrated high-fidelity operations that approach the thresholds needed for practical error correction.
Academic and National Laboratory Research
Delft University of Technology in the Netherlands is a leader in topological qubit research, exploring exotic quantum states that could simplify error correction, and is now home to Riverlane’s European QEC hub.
The University of Sydney in Australia, where Iceberg Quantum’s founders trained under Professor Stephen Bartlett, is a world leader in theoretical QEC research, particularly in LDPC codes and fault-tolerant architectures.
Yale University, the University of California, Berkeley, and MIT have strong quantum error correction groups that have published foundational work on surface codes, LDPC codes, and fault-tolerant gate implementations.
Los Alamos National Laboratory, Oak Ridge National Laboratory, and Argonne National Laboratory in the U.S. conduct error correction research as part of broader quantum computing programs, often in partnership with private companies.
The Max Planck Institute in Germany also contributes significant theoretical and experimental work on error correction schemes.
Government and International Initiatives
The U.S. National Quantum Initiative funds error correction research through agencies like the National Science Foundation (NSF) and the Department of Energy (DOE). The Department of Energy Quantum Leadership Act of 2025 proposes $2.5 billion in quantum funding across fiscal years 2026-2030.
The European Quantum Flagship supports projects across member states focused on error correction for photonic, superconducting, and trapped-ion systems.
China’s quantum computing programs, coordinated through national laboratories and universities, are actively pursuing error correction, though less information is publicly available compared to Western efforts. Japan now leads public quantum investment with nearly $8 billion committed, much of it allocated in 2025.
The Talent Challenge
The Riverlane-Resonance QEC Report 2025 warns of a severe talent crisis. Only an estimated 1,800 to 2,200 professionals currently specialize in QEC worldwide, but 5,000 to 16,000 will be needed by 2030. With 50-66% of quantum job openings remaining unfilled and specialized QEC training taking up to 10 years, the workforce gap may be the industry’s most pressing bottleneck alongside the technical challenges.
What Technical Milestones Must Quantum Error Correction Achieve?
The path from today’s noisy quantum computers to fault-tolerant systems capable of solving real-world problems requires achieving several critical milestones. Researchers and industry leaders use these benchmarks to track progress and set roadmaps:
1. Demonstrating Below-Threshold Error Correction
The most fundamental milestone is showing that logical error rates decrease exponentially as more physical qubits are added to the error correction code. This is called “below-threshold” operation, meaning the physical error rates are low enough that error correction helps more than it hurts.
Google’s Willow chip demonstrated this milestone in late 2024, showing that increasing the surface code distance from 3 to 5 to 7 (arrays of 3×3, 5×5, and 7×7 data qubits) cut the logical error rate by roughly a factor of two at each step. This was the first time below-threshold error correction was demonstrated convincingly in a superconducting quantum system.
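The quantity to watch in such demonstrations is the suppression factor, often written Lambda, by which the logical error rate falls each time the code distance grows by two. The numbers in the sketch below are illustrative rather than Google’s published measurements; the point is simply that Lambda > 1 means errors shrink exponentially with distance, while Lambda < 1 means adding qubits makes things worse.

```python
def logical_error(d: int, eps_d3: float, lam: float) -> float:
    """Estimated logical error at odd distance d, given the rate at d = 3
    and the per-step suppression factor lam (Lambda)."""
    return eps_d3 / lam ** ((d - 3) // 2)

for lam in (2.0, 0.8):                      # above vs. below threshold behaviour
    rates = {d: logical_error(d, eps_d3=3e-3, lam=lam) for d in (3, 5, 7, 9)}
    trend = "suppressed" if lam > 1 else "growing"
    print(f"Lambda = {lam}: " +
          ", ".join(f"d={d}: {r:.1e}" for d, r in rates.items()) +
          f"  -> logical errors {trend} as distance increases")
```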
IBM, IonQ, and others are working toward similar demonstrations on their hardware platforms. Until all major hardware types achieve this milestone, the scalability of quantum computing remains uncertain.
2. Creating Multiple Logical Qubits
Demonstrating error correction on a single logical qubit is impressive, but useful quantum algorithms require many logical qubits working together. The next milestone is showing that multiple logical qubits can be created, maintained, and operated simultaneously without interference.
This requires scaling up the number of physical qubits, improving syndrome measurement speeds, and managing the classical computing resources needed for real-time decoding across multiple codes running in parallel.
3. Implementing Fault-Tolerant Gate Sets
Not all quantum gates are created equal in fault-tolerant systems. Some gates – like the Clifford gates used in syndrome measurements – can be implemented fault-tolerantly with relatively low overhead. Others, like the T gate needed for universal quantum computation, are much more expensive to implement on error-corrected qubits.
Demonstrating a full universal gate set operating fault-tolerantly on logical qubits is essential. This includes showing that logical gate error rates are low enough to chain thousands or millions of gates together without accumulating errors.
4. Achieving Practical Logical Error Rates
For commercially useful algorithms, logical error rates need to reach roughly 10^-15 or lower – roughly a trillion times better than today’s physical error rates. Achieving this requires not just implementing error correction but optimizing codes, improving physical hardware, and scaling to large numbers of physical qubits.
This milestone is years away, but intermediate targets like 10^-6 or 10^-9 logical error rates would already enable longer, more complex algorithms than current NISQ systems can run.
5. Reducing Decoding Latency
Classical decoding – the process of analyzing syndrome measurements and determining which corrections to apply – must happen faster than errors accumulate. As quantum systems scale to millions of physical qubits, decoding latency becomes a critical bottleneck.
Riverlane’s FPGA-based decoder demonstrates the direction the field is moving: away from software decoders and toward specialized hardware (FPGAs and eventually ASICs) that can process syndromes in under one microsecond. The company’s upcoming Deltaflow 3 platform aims to enable continuous “streaming logic” that corrects errors while computation is happening.
6. Demonstrating End-to-End Fault-Tolerant Algorithms
The ultimate milestone is running a complete, useful quantum algorithm on error-corrected logical qubits from start to finish. This means initializing logical qubits, performing fault-tolerant gates, and reading out results – all while maintaining error correction throughout.
No one has achieved this yet. Current demonstrations show pieces of the puzzle – error correction on one logical qubit, fault-tolerant gates on small systems, or long coherence times without full error correction. Putting all the pieces together represents the transition from research to practical quantum computing.
| Milestone | Current Status | Estimated Timeline |
| --- | --- | --- |
| Below-Threshold Error Correction | Demonstrated by Google (2024) on superconducting qubits | Achieved in leading systems, ongoing for other platforms |
| Multiple Logical Qubits | Small-scale demos (2-3 logical qubits) | 2-5 years for 10+ logical qubits |
| Fault-Tolerant Gate Sets | Clifford gates demonstrated; T gates more challenging | 3-7 years for full universal sets |
| Practical Logical Error Rates (10^-15) | Currently at 10^-2 to 10^-3 on physical qubits | 10-15 years with continued progress |
| Low Decoding Latency at Scale | Sub-microsecond demonstrated (Riverlane); scaling remains challenge | Ongoing research, hardware-dependent |
| End-to-End Fault-Tolerant Algorithms | Not yet demonstrated | 10-15 years for commercially relevant problems |
These timelines are estimates based on current progress and assume continued investment and no fundamental roadblocks. Breakthroughs in codes (like LDPC approaches from Iceberg Quantum), hardware, or algorithms could accelerate progress, while unforeseen challenges could delay it.
When Will Error-Corrected Quantum Computers Become Practical?
The timeline for fault-tolerant quantum computing depends on who you ask and what you mean by “practical.” Here’s how different communities view the path forward:
Optimistic Projections (5-10 Years)
Some quantum computing companies and researchers believe that fault-tolerant systems capable of outperforming classical computers on commercially relevant problems could emerge within the next five to ten years. This view assumes continued exponential improvement in physical qubit quality and count, successful implementation of surface codes or LDPC codes at scale, advances in classical decoding that keep pace with hardware growth, and targeted algorithms that require relatively few logical qubits (hundreds rather than thousands).
Companies like Google, IBM, and IonQ have public roadmaps targeting fault tolerance in this timeframe, though the specific applications and performance levels remain uncertain.
Moderate Projections (10-15 Years)
The broader research community tends toward a more conservative timeline, estimating that practical fault-tolerant quantum computers will require ten to fifteen years of continued development. This view acknowledges the enormous engineering challenge of scaling from today’s hundreds of physical qubits to the millions needed for useful applications, the need for multiple generations of hardware improvements to reach target error rates, the difficulty of implementing complex error correction codes in real-world systems with practical constraints, and the time required to develop and test fault-tolerant algorithms for specific industries.
This timeline aligns with the projected maturation of quantum computing from experimental science to commercial infrastructure. Riverlane’s survey found that 2028 has emerged as an informal industry deadline for QEC integration, suggesting the community expects meaningful progress within the next few years even if full fault tolerance takes longer.
Conservative Projections (15+ Years)
Some skeptics and researchers working on fundamental challenges argue that fault-tolerant quantum computing could take longer – potentially fifteen years or more – particularly for the most demanding applications like breaking RSA encryption or large-scale materials simulation. This perspective emphasizes the gap between current physical error rates (~0.1%-1%) and the rates needed for efficient error correction (~0.01% or lower), as well as the overhead of current error correction codes, which could require 10,000 or more physical qubits per logical qubit in pessimistic scenarios. It also points to unforeseen technical challenges that emerge as systems scale, and to the possibility that alternative approaches – like error mitigation, quantum-inspired classical algorithms, or entirely different computing paradigms – could prove more practical for certain problems.
What Practical Means
It’s worth noting that “practical fault-tolerant quantum computing” is not a binary milestone but a continuum. Early fault-tolerant systems with tens of logical qubits might solve problems in quantum chemistry or materials science before they tackle cryptography or financial optimization. Applications with shorter circuit depths will become feasible before those requiring millions of gates.
This means we should expect a gradual rollout rather than a sudden leap. Some industries will see benefits in the near term, while others wait longer. The timeline for quantum computing isn’t a single date but a series of thresholds, each unlocking new capabilities.
For investors and business leaders, the practical implication is to monitor error correction progress as the leading indicator of commercial viability. Companies demonstrating logical qubits, below-threshold operation, and fault-tolerant gates are the ones moving toward applications that justify their valuations. And the companies building the error correction infrastructure – like Riverlane’s decoding stack and Iceberg Quantum’s LDPC architectures – are positioning themselves as essential enablers regardless of which hardware platform ultimately wins.
Frequently Asked Questions
What is quantum error correction?
Quantum error correction is a set of techniques for protecting quantum information from errors caused by decoherence and noise by encoding logical qubits across multiple physical qubits. Unlike classical error correction, which copies bits, quantum error correction must work around the no-cloning theorem and avoid measuring quantum states directly. It detects and corrects errors by measuring correlations between physical qubits (called syndromes) without collapsing the quantum superposition.
Why can’t quantum computers just use classical error correction?
Classical error correction relies on copying information and performing majority votes to detect errors. Quantum mechanics forbids copying unknown quantum states (the no-cloning theorem), and measuring qubits to check for errors collapses superpositions, destroying the quantum information. Quantum error correction must detect errors through indirect measurements of qubit correlations while preserving the quantum state – a fundamentally different approach than classical methods.
How many physical qubits does it take to make one logical qubit?
The number varies widely depending on the error correction code used and the physical error rates of the underlying hardware. Current estimates for surface codes suggest 100 to 1,000 physical qubits per logical qubit, though this could increase to several thousand for very demanding applications. Newer codes like quantum LDPC codes, championed by companies like Iceberg Quantum, might reduce overhead to tens or hundreds of physical qubits per logical qubit. Hardware with better physical error rates requires fewer physical qubits to achieve the same logical error rate.
Has anyone built a fully error-corrected quantum computer?
No. Current quantum computers operate in the Noisy Intermediate-Scale Quantum (NISQ) era, where error rates are too high for sustained error correction. However, researchers have demonstrated critical milestones, including Google’s 2024 Willow chip showing below-threshold error correction (where adding more physical qubits reduces logical error rates). Riverlane has demonstrated real-time hardware decoding at sub-microsecond speeds. These demonstrations prove error correction works in principle, but building systems with hundreds or thousands of error-corrected logical qubits remains years away.
What is the difference between NISQ computers and fault-tolerant computers?
NISQ (Noisy Intermediate-Scale Quantum) computers have tens to thousands of physical qubits but lack full error correction, limiting them to short, error-prone calculations. Fault-tolerant quantum computers use quantum error correction to create stable logical qubits capable of running long, complex algorithms without accumulating errors. NISQ machines are useful for research and limited applications like quantum sensing experiments and quantum chemistry simulations, while fault-tolerant systems will enable the commercially transformative applications quantum computing promises.
Which quantum error correction code is best?
There is no universal “best” code – the optimal choice depends on the hardware platform, physical error rates, and application requirements. Surface codes are currently the most widely implemented because they require only local interactions between qubits, making them compatible with most hardware. However, they have high overhead (many physical qubits per logical qubit). Newer quantum LDPC codes, being developed by Iceberg Quantum and adopted by IBM, could reduce overhead significantly but require long-range qubit connectivity. The field continues to evolve, and following IBM’s transition to qLDPC codes in 2024, more companies are expected to follow in 2026.
When will quantum computers have enough logical qubits to break encryption?
Breaking RSA-2048 encryption has traditionally been estimated to require roughly 20 million physical qubits using surface codes. However, Iceberg Quantum’s Pinnacle architecture suggests this could potentially be achieved with under 100,000 physical qubits using optimized LDPC codes. Even with these advances, most experts estimate this capability is at least 10-15 years away. In the meantime, organizations are transitioning to post-quantum cryptography algorithms that resist quantum attacks, and quantum networking technologies like quantum key distribution offer security guaranteed by physics rather than computational hardness. The timeline remains highly uncertain and depends on continued progress in error correction and hardware scaling.
This article provides an educational overview of quantum error correction technology and is not intended as investment, technical implementation, or strategic planning advice.



