Insider Brief
- Google Quantum AI researchers say building a fault-tolerant quantum computer is possible with superconducting qubits, but it will require major advances in materials science, manufacturing, system design, and error mitigation.
- The study identifies critical obstacles including performance-limiting material defects, complex qubit calibration needs, and inadequate infrastructure for scaling cryogenic systems to support millions of components.
- Researchers call for sustained industry-academic collaboration to overcome hardware and integration challenges, likening the effort to building mega-science projects such as CERN or LIGO.
- Image: The roadmap features six milestones towards building a fault-tolerant quantum computer (https://quantumai.google/roadmap).
Google Quantum AI researchers say that building a fault-tolerant quantum computer using superconducting qubits is achievable – but not without rethinking everything from materials science to system integration. In a new Nature Electronics study, the team outlined the scale of the challenge and the technical roadblocks that must be cleared before such machines can outperform today’s classical supercomputers on practical tasks.
Superconducting qubits are one of the most advanced technologies for building quantum computers. They can be fabricated using techniques similar to those in the semiconductor industry, allowing for precise design and integration. However, as Anthony Megrant and Yu Chen of Google Quantum AI explain, going from today’s hundreds of qubits to millions will require advances in materials, hardware testing and system architecture. Despite progress, fundamental limits imposed by materials defects, complexity in tuning individual components, and scaling cryogenic infrastructure all stand in the way, according to the study.
“Building a fault-tolerant quantum computer with superconducting qubits is comparable to constructing a mega-science facility such as CERN or the Laser Interferometer Gravitational-Wave Observatory (LIGO), with millions of components and complex cryogenic systems,” the researchers write. “Many of these components — from high-density wiring to control electronics — require years of dedicated development before reaching commercial production.”

Hardware Progress, But Challenges Remain
The Google Quantum AI roadmap lays out six milestones to build a fault-tolerant quantum computer. The first two — demonstrating quantum supremacy in 2019 and then operating with hundreds of qubits in 2023 — have been achieved. The next four require building a long-lived logical qubit, achieving a universal gate set, and scaling to large, error-corrected machines. Progress in reducing gate error rates and increasing qubit coherence times has been steady. But researchers caution that to reach the next stage, improvements in performance must be matched with improvements in scale.
Superconducting qubits, unlike naturally identical atoms, are man-made and show considerable variation in performance. This means each qubit needs to be individually tuned.
The researchers write: “Superconducting qubits can be thought of as artificial atoms, the properties of which — including transition frequencies and coupling strengths — can be engineered and tuned. This reconfigurable nature has been critical to achieving high performance, especially in integrated systems.”
While this flexibility allows engineers to avoid errors like crosstalk between qubits, it complicates scaling — requiring more control hardware and software, and raising costs.
Adding to the complexity are defects called two-level systems, tiny flaws in the materials used to build qubits. These defects can cause a qubit’s frequency to drift, lowering fidelity and introducing errors. Despite being known for decades, the physical origins of these defects remain poorly understood, making them hard to eliminate. Google’s researchers say understanding and mitigating these defects will require collaborative work across physics, chemistry, materials science, and engineering.
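To make the effect concrete, the snippet below is a minimal toy model, not taken from the study: it treats the extra energy decay a qubit suffers from a single nearby defect as a Lorentzian function of the qubit-defect detuning, so the qubit's effective T1 collapses when a drifting defect lands on resonance. All parameter values are hypothetical.

```python
# Toy model (illustrative only): extra qubit energy decay caused by a single
# two-level-system (TLS) defect, treated as a Lorentzian function of the
# qubit-TLS detuning. Parameter values are hypothetical, not from the study.
import numpy as np

G_COUPLING = 0.5e6      # qubit-TLS coupling strength g (Hz), assumed
TLS_LINEWIDTH = 2e6     # TLS decoherence rate gamma (Hz), assumed
BASE_T1 = 100e-6        # intrinsic qubit T1 without the defect (s), assumed

def tls_decay_rate(detuning_hz):
    """Extra decay rate from one TLS, peaking on resonance (detuning = 0)."""
    g, gamma = G_COUPLING, TLS_LINEWIDTH
    return 2 * g**2 * gamma / (gamma**2 + detuning_hz**2)

detunings = np.linspace(-20e6, 20e6, 9)   # sweep of qubit-TLS detuning (Hz)
for delta in detunings:
    total_rate = 1 / BASE_T1 + tls_decay_rate(delta)
    print(f"detuning {delta/1e6:+6.1f} MHz -> effective T1 {1e6/total_rate:6.1f} us")
```

In this toy sweep the qubit keeps most of its assumed 100-microsecond lifetime when the defect sits tens of megahertz away, but loses most of it once the defect drifts onto resonance, which is the kind of fidelity drop the researchers describe.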
Materials Research and Fabrication Overhaul
Two-level systems are believed to arise from imperfections or contamination during chip fabrication, according to the study. Eliminating these will require changes in how quantum chips are made. Current techniques use organic materials that can leave behind impurities. New materials — such as improved superconductors — and better cleanroom processes could help, but need rigorous testing.
According to the researchers, one problem is that current tools for characterizing material defects are inefficient. Qubits themselves are used as sensors, which is time-consuming and yields sparse data. The study calls for developing faster, specialized tools that can analyze qubit materials during manufacturing and link surface features to performance issues.
Standardized sensors, like modified transmon qubits designed to measure environmental interference, could also help create a shared testing framework for the quantum industry. Initiatives like the Boulder Cryogenic Quantum Testbed aim to fill this gap by offering standardized measurement services to hardware developers.
Mitigation Strategies Exist — But Don’t Scale Easily

In the meantime, researchers use mitigation techniques to reduce the impact of defects. One common method is frequency optimization, where software algorithms search for the best operating frequency for each qubit and coupler. While effective in small systems, the method requires complex modeling and computation, which may not scale well.
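As a rough illustration of what such a search involves, here is a minimal sketch in Python, not Google's production calibration code: each qubit greedily picks an operating frequency that avoids both the frequencies already assigned to its neighbours (a stand-in for crosstalk collisions) and frequencies near known defect locations. The frequency grid, penalty weights and defect list are all assumed for illustration.

```python
# Minimal sketch of frequency optimization (illustrative, not the production
# algorithm): greedily pick an operating frequency for each qubit that avoids
# (1) frequencies too close to already-assigned neighbours (crosstalk) and
# (2) frequencies near known TLS defects. All numbers are hypothetical.

CANDIDATES_GHZ = [5.0 + 0.02 * k for k in range(26)]   # 5.00-5.50 GHz grid
MIN_NEIGHBOUR_GAP = 0.08                               # GHz, assumed
TLS_FREQS_GHZ = {0: [5.12], 2: [5.30, 5.44]}           # defects seen per qubit
TLS_EXCLUSION = 0.03                                   # GHz, assumed

def cost(qubit, freq, assigned, neighbours):
    """Penalty for operating `qubit` at `freq`, given neighbours already set."""
    penalty = 0.0
    for n in neighbours.get(qubit, []):
        if n in assigned and abs(assigned[n] - freq) < MIN_NEIGHBOUR_GAP:
            penalty += 1.0                     # crosstalk / collision penalty
    for f_tls in TLS_FREQS_GHZ.get(qubit, []):
        if abs(f_tls - freq) < TLS_EXCLUSION:
            penalty += 10.0                    # strong penalty near a defect
    return penalty

def assign_frequencies(qubits, neighbours):
    assigned = {}
    for q in qubits:                           # greedy, one qubit at a time
        assigned[q] = min(CANDIDATES_GHZ,
                          key=lambda f: cost(q, f, assigned, neighbours))
    return assigned

# Tiny 4-qubit chain: 0-1-2-3
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(assign_frequencies(range(4), neighbours))
```

Even this toy version hints at the scaling problem the researchers flag: the real optimization must model crosstalk, defects and pulse distortions across millions of components rather than a four-qubit chain.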
Other methods include tuning frequencies with microwave fields or electric fields. But these require extra hardware or have limited flexibility, again posing challenges for large-scale systems.
Scaling to Supercomputer-Sized Systems
A fault-tolerant quantum computer will need to match the scale of modern supercomputers, with millions of components operating at temperatures near absolute zero. Building such systems means rethinking their architecture.
Because current cryogenic systems can only host a few thousand qubits and take days to cycle between hot and cold states, Google proposes a modular design. Instead of one giant machine, smaller, self-contained modules would each house a portion of the total system. This approach would reduce maintenance time and cost, and allow individual modules to be tested and replaced without needing to shut down the entire system.
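A back-of-the-envelope sketch, using entirely hypothetical numbers within the ranges the article mentions, shows why modularity matters: when a repair forces a thermal cycle, a monolithic machine loses every qubit for days, while a modular one loses only a small fraction.

```python
# Back-of-the-envelope sketch with hypothetical numbers (not from the study):
# compare servicing one monolithic cryostat versus one module of a modular system.
TOTAL_QUBITS = 1_000_000        # target scale; the article says "millions of components"
QUBITS_PER_MODULE = 5_000       # assumed capacity of one cryogenic module
THERMAL_CYCLE_DAYS = 3          # assumed warm-up plus cool-down time per cryostat

modules = -(-TOTAL_QUBITS // QUBITS_PER_MODULE)   # ceiling division
print(f"modules needed: {modules}")

# Monolithic design: any repair thermally cycles the whole machine.
print(f"monolithic repair: 100% of qubits offline for ~{THERMAL_CYCLE_DAYS} days")

# Modular design: only the affected module cycles; the rest keeps running.
offline_fraction = 100 / modules
print(f"modular repair: ~{offline_fraction:.2f}% of qubits offline for ~{THERMAL_CYCLE_DAYS} days")
```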
However, this modularity will only work if performance requirements can be translated from system-wide goals down to individual modules. Testing such large numbers of components will require new high-throughput tools. Today’s test infrastructure, borrowed from classical chipmaking, isn’t yet adapted for quantum hardware — particularly for testing at millikelvin temperatures.
Integration Exposes New Problems
Even as the field grows, new challenges are emerging. As systems scale, previously negligible issues — such as parasitic couplings and control signal interference — begin to affect overall system behavior.
Experiments with large processors like Sycamore and its successor, Willow, have revealed new types of errors that affect groups of qubits simultaneously. For example, leakage errors, in which a qubit’s state escapes the defined computational space, can spread and cause correlated errors across the system, undermining error correction methods.
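To see why correlated errors are so corrosive, the toy simulation below, which is not the error-correction scheme used on Willow, encodes one logical bit in a five-bit repetition code with majority voting: independent flips are suppressed strongly, but a single burst that flips three neighbouring bits at once defeats the vote, even though the burst occurs no more often than an individual flip.

```python
# Toy illustration (not the code used on Willow): a 5-bit repetition code with
# majority voting suppresses independent bit flips, but a correlated "burst"
# error that hits several neighbouring bits at once defeats the vote.
import random

random.seed(0)
N_BITS = 5            # physical bits encoding one logical bit
P_FLIP = 0.05         # independent flip probability per bit, assumed
P_BURST = 0.05        # probability of one event flipping 3 adjacent bits, assumed
TRIALS = 200_000

def logical_failure(correlated):
    bits = [0] * N_BITS
    if correlated:
        if random.random() < P_BURST:          # one event, many bits
            start = random.randrange(N_BITS - 2)
            for i in range(start, start + 3):
                bits[i] ^= 1
    else:
        for i in range(N_BITS):                # independent events
            if random.random() < P_FLIP:
                bits[i] ^= 1
    return sum(bits) > N_BITS // 2             # majority vote decodes wrongly

for label, correlated in [("independent", False), ("correlated burst", True)]:
    fails = sum(logical_failure(correlated) for _ in range(TRIALS))
    print(f"{label:>16}: logical error rate ~ {fails / TRIALS:.4f}")
```

In this toy setting the independent-error case fails roughly once in a thousand trials, while the correlated case fails at essentially the burst rate itself, which is why errors that hit many qubits at once are singled out as a threat to error correction.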
Although not always listed as a source of noise, even cosmic rays have emerged as a threat. These high-energy particles can disrupt qubits in large-scale systems, setting a limit on performance. Research teams are now developing techniques such as junction gap engineering and leakage removal circuits to mitigate these new error sources.
Fundamental Research and Industry Need Each Other
Much of the progress in superconducting qubits has come from collaboration. Academic researchers invented the transmon qubit, improved materials, and tested mitigation strategies long before industry picked them up. Google’s Sycamore processor used academic research on tunable couplers to build a high-fidelity, large-qubit array. Its successor, Willow, benefits from university-led improvements in qubit coherence and fabrication.
Looking forward, Megrant and Chen argue that building a fault-tolerant quantum computer will take a similar path to other mega-science projects. Both fundamental science and scalable engineering must advance together. Industry has the resources to build and integrate, while academia drives discovery.
They write: “As major industrial participants unveil the challenges associated with their roadmaps, we call for enhanced knowledge sharing and collaboration, leveraging the unique strengths of industrial and academic groups. Academia will drive innovation through fundamental research (exploring new materials and qubit designs, nurturing future scientists and investigating open challenges), while industry translates this research into scalable manufacturing, robust infrastructure and large-scale integration. This unified approach will foster a sustainable quantum ecosystem, enabling progress towards the first fault-tolerant quantum computer.”