
Study Offers Roadmap For Building Tomorrow’s Quantum Supercomputers


Insider Brief

  • A new study outlines a comprehensive roadmap for building utility-scale quantum supercomputers by integrating quantum processors with classical high-performance computing (HPC) systems.
  • The researchers emphasize hybrid quantum-classical systems, advanced qubit fabrication, and fault-tolerant error correction as essential components for scaling quantum devices to millions of qubits.
  • Applications such as molecular simulations in quantum chemistry showcase the potential for quantum advantage, with improved algorithms and hardware cutting resource demands by up to two orders of magnitude.

Scientists have mapped out a detailed strategy to scale quantum computers into supercomputers capable of solving problems beyond the reach of classical machines, according to a research study — How to Build a Quantum Supercomputer — posted on the pre-print server arXiv. The team also explains its vision for this quantum supercomputer, which would require integrating quantum computers with classical high-performance computing systems.

The study, published by a team of researchers from several quantum startups and institutions such as Hewlett Packard Labs, NASA Ames, and the University of Wisconsin, highlights both the challenges and opportunities in building utility-scale quantum systems and merging them with high-performance computing systems.

They write: “Rather than replacing classical computers as general-purpose processors, quantum computers can be better understood as accelerators or coprocessors that can efficiently carry out specialized tasks within an HPC framework. Hybrid quantum–classical frameworks will be crucial not only in the near term—the noisy, intermediate scale quantum (NISQ) era [10]—but also for future fault-tolerant quantum computation (FTQC), as error-correction schemes will rely heavily on classical HPC and the number of logical qubits will be fairly small for the foreseeable future. To achieve true utility-scale quantum computing, successful integration with existing heterogeneous HPC infrastructures and the development of a hybrid quantum–classical full computing stack are necessary.”

The paper also breaks down engineering hurdles, such as qubit fabrication and fault-tolerant error correction, and proposes integrating quantum processors with high-performance computing (HPC) systems. The findings could help shape the trajectory of quantum computing, potentially unlocking applications in drug development, optimization, and cryptography.


According to the researchers, building quantum supercomputers will require a systems engineering approach to bridge the gap between research-scale and practical systems. The team explains that systems engineering means simultaneously optimizing the many interdependent parameters of a complex system. They further argue that any advances, or breakthroughs, will depend on adopting modern semiconductor tools, improving qubit quality, and designing hybrid quantum-classical architectures.

Scaling from Concept to Reality

Quantum computing, still largely experimental, faces scalability issues that prevent practical use. Existing quantum devices can handle problems involving up to hundreds of qubits, but utility-scale systems will require millions. Key findings from the study include:

  • Qubit Quality and Fabrication: The researchers call for advanced fabrication techniques using semiconductor processes to produce qubits with consistent quality. Unlike traditional electronics, quantum bits, or qubits, are highly error-prone. The study notes that current processes often yield uneven results, with a small percentage of qubits degrading overall system performance.
  • Hybrid Quantum-Classical Systems: The paper emphasizes the importance of pairing quantum computers with classical systems. By distributing workloads between classical and quantum processors, the researchers argue, hybrid systems can overcome bottlenecks in data management and processing.
  • Fault-Tolerant Design: Quantum error correction is essential for scaling. The study introduces approaches to manage errors in real time, such as integrating quantum decoders with GPUs to accelerate error detection and correction.
  • Wafer-Scale Integration: Borrowing from semiconductor manufacturing, the team proposes wafer-scale integration to embed thousands of qubits on a single chip. This would reduce communication delays and improve efficiency.

The authors write that these approaches, taken together, provide a realistic pathway to building quantum supercomputers.
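To make the role of classical decoding concrete, here is a minimal, purely illustrative Python sketch of the simplest error-correcting code: a three-qubit repetition code protecting against bit-flips. It is not the fault-tolerant scheme or GPU-accelerated decoder described in the paper; it only shows, in miniature, the encode, corrupt, decode loop that real-time decoders must run at vastly larger scale and speed.

```python
# Toy illustration of quantum error correction: a 3-bit repetition code
# protecting one logical bit against bit-flips. Real fault-tolerant schemes
# (e.g., surface codes with GPU-accelerated decoders) are far more involved;
# this only sketches the measure/decode/correct loop conceptually.
import random

def encode(bit):
    """Encode one logical bit into three physical bits."""
    return [bit, bit, bit]

def apply_noise(bits, p_flip):
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ (random.random() < p_flip) for b in bits]

def decode(bits):
    """Majority vote: the classical 'decoder' recovers the logical bit."""
    return int(sum(bits) >= 2)

def logical_error_rate(p_flip, trials=100_000):
    errors = 0
    for _ in range(trials):
        logical = random.randint(0, 1)
        noisy = apply_noise(encode(logical), p_flip)
        errors += (decode(noisy) != logical)
    return errors / trials

if __name__ == "__main__":
    for p in (0.05, 0.01, 0.001):
        print(f"physical error {p}: logical error ~ {logical_error_rate(p):.5f}")
```

Below a threshold error rate, adding redundancy suppresses logical errors, which is why fault-tolerant protocols trade many physical qubits for each reliable logical qubit.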

Scaling Quantum Computing Faces Technical Hurdles at Every Level

The researchers acknowledge that building large-scale quantum computers demands innovative solutions to challenges that evolve with the size and complexity of the system. As processors scale from hundreds of physical qubits in today’s noisy intermediate-scale quantum (NISQ) machines to the millions required for utility-scale fault-tolerant quantum computing (FTQC), researchers face a mix of familiar — and potentially unprecedented — obstacles.

At smaller scales, systems with 100 to 1,000 physical qubits encounter challenges in hardware quality and stability. Variability in qubit performance, such as “fat-tail” error distributions — where a few poorly performing qubits degrade the system — poses significant risks. Frequent recalibration, driven by time-varying two-level-system (TLS) defects, hampers reliability, while external factors such as cosmic rays exacerbate error rates.
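The impact of a fat-tail error distribution can be shown with a short back-of-the-envelope Python sketch; the error values and the 2% outlier fraction below are invented for illustration and are not figures from the study.

```python
# Toy illustration of "fat-tail" qubit error distributions: a small fraction
# of outlier qubits can consume a large share of the total error budget.
# All numbers are made up for illustration; none come from the paper.
import random

random.seed(7)
N_QUBITS = 1000
BAD_FRACTION = 0.02            # assume 2% of qubits are outliers

errors = []
for _ in range(N_QUBITS):
    if random.random() < BAD_FRACTION:
        errors.append(random.uniform(0.02, 0.05))      # outlier: 2-5% gate error
    else:
        errors.append(random.uniform(0.0005, 0.0015))  # typical: ~0.1% gate error

errors.sort(reverse=True)
worst = errors[: int(BAD_FRACTION * N_QUBITS)]

total_budget = sum(errors)
tail_share = sum(worst) / total_budget
print(f"total per-layer error budget: {total_budget:.2f}")
print(f"share contributed by the worst {len(worst)} qubits: {tail_share:.0%}")
```

Screening out or recalibrating that handful of outliers recovers a disproportionate amount of overall performance, which is why the study stresses consistent fabrication yield.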

Moving to 1,000 to 10,000 qubits introduces problems of integration and cost. The dense wiring needed for control and readout complicates scaling within dilution refrigerators, where space is limited. Cooling systems and control electronics become cost drivers, requiring semiconductor-inspired designs to reduce expense and power consumption.

For systems with 10,000 to 100,000 qubits, managing error correction becomes a bottleneck. Fault-tolerant protocols demand significant physical resources to correct errors faster than they accumulate. Cross-talk and gate errors further strain scalability, and debugging such complex systems grows increasingly difficult. Verification tools and diagnostic techniques, akin to those used in classical semiconductor circuits, must be adapted for quantum architectures.

Reaching scales of 100,000 to 1 million qubits will likely require distributed quantum computing, with interconnects between multiple quantum processors housed in separate dilution refrigerators. Such an approach introduces new technical hurdles, including managing inter-processor communication and dynamically allocating computational workloads.

Throughout these scales, the study emphasizes the need for adaptive solutions, such as hybrid quantum-classical systems, innovative error correction codes, and advanced fabrication techniques. The researchers argue that tackling these issues at every stage will be crucial to achieving utility-scale quantum computers capable of solving real-world problems.

Paving The Roadmap

The study is grounded in a systematic analysis of challenges at different scales of quantum systems. It outlines a step-by-step progression from today’s noisy intermediate-scale quantum (NISQ) systems to fault-tolerant machines with millions of qubits.

  1. Hardware Design: The researchers evaluated superconducting qubits, focusing on ways to improve coherence times, reduce errors, and optimize performance.
  2. Architectural Integration: By designing hybrid systems, the team seeks to make quantum processors act as accelerators rather than standalone devices. This approach mimics classical supercomputing systems that use GPUs to complement CPUs.
  3. Error Correction: The study highlights the importance of quantum error correction codes that can mitigate noise and prevent errors from cascading through calculations.

Quantum Resource Estimation Highlights Chemistry’s Role in Scaling

The researchers also explore how improvements in qubit quality can reduce hardware demands and computational overhead for practical applications. The study focuses on quantum resource estimation (QRE) for utility-scale systems, particularly in simulating electronic structures in molecules critical to chemistry and biology.

Quantum chemistry offers a key application for FTQC, the team writes, as the accurate computation of molecular ground-state energies is vital for fields like drug discovery and materials science. The study examines two molecules of interest: para-benzyne, a candidate for cancer drug design, and FeMoco, an iron-molybdenum cofactor central to nitrogen fixation in agriculture. Simulating such systems is beyond the reach of classical computers for large molecules, making them prime targets for quantum advantage.

The research evaluates two approaches to implementing the quantum phase estimation (QPE) algorithm: traditional Trotterization and modern qubitization techniques. Both methods translate molecular specifications into quantum circuits, but they differ in efficiency. Qubitization significantly reduces gate complexity, requiring fewer qubits and achieving faster runtimes compared to Trotterization.

For para-benzyne, the study finds that achieving chemical accuracy—errors below 1.6 milliHartree—requires 10 million to 100 million physical qubits depending on hardware quality. Simulating FeMoco, which involves larger active spaces, demands up to 150 million qubits and runtimes spanning days to years.

However, the team writes that improving qubit fidelity and algorithmic design can reduce resource requirements by up to two orders of magnitude. On advanced hardware, qubitization can outperform Trotterization by a factor of 50 in runtime, according to the study.
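For readers who want a feel for how qubit quality drives physical-qubit counts, the sketch below uses a generic, textbook-style surface-code scaling rule rather than the paper's actual quantum resource estimation model. The threshold, prefactor and qubit-overhead constants are ballpark values commonly quoted in the literature, and the logical-qubit count and error budget are arbitrary placeholders.

```python
# Rough, generic surface-code-style estimate of how physical-qubit counts
# shrink as hardware error rates improve. This is NOT the paper's QRE model;
# the constants (threshold ~1%, prefactor 0.1, 2*d^2 physical qubits per
# logical qubit) are common ballpark values from the literature.

P_THRESHOLD = 1e-2      # assumed surface-code threshold
PREFACTOR = 0.1

def code_distance(p_phys, target_logical_error):
    """Smallest odd distance d with 0.1*(p/p_th)^((d+1)/2) <= target."""
    assert p_phys < P_THRESHOLD, "below-threshold hardware assumed"
    d = 3
    while PREFACTOR * (p_phys / P_THRESHOLD) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

def physical_qubits(n_logical, p_phys, target_logical_error):
    d = code_distance(p_phys, target_logical_error)
    return n_logical * 2 * d * d, d

if __name__ == "__main__":
    n_logical = 4000        # illustrative logical-qubit count
    target = 1e-12          # illustrative per-logical-qubit error budget
    for p in (1e-3, 1e-4):
        total, d = physical_qubits(n_logical, p, target)
        print(f"p_phys={p:.0e}: distance {d}, ~{total:,} physical qubits")
```

Even with these crude assumptions, a tenfold improvement in physical error rate shrinks both the code distance and the total qubit count substantially, in line with the study's broader point that better qubits and better algorithms compound.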

The researchers write: “While there is no such sharp transition line between what is classically tractable and intractable (which highly depends on the extent of quantum correlations in the studied systems), a quantum advantage gradually appears for orbital numbers beyond Norb ≈ 50. These insights also motivate future research, namely, developing quantum heuristic algorithms that could bring the transition to a quantum advantage down to smaller problem sizes. This naturally follows the development of classical algorithms, where the transition from the guarantees of FCI to the heuristics of DMRG greatly reduced the necessary resources.”

The Challenges of Bridging Quantum and Classical

Integrating quantum computing into high-performance computing (HPC) systems presents significant design and operational challenges, according to the study. These stem not just from the physical and operational differences between quantum processors (QPUs) and classical components, but also from the algorithmic hurdles in managing memory, data movement and program efficiency.

On the hardware side, quantum and classical components differ in reliability, operational timescales and communication bandwidth. Physically co-locating these resources within the same hardware node may be necessary to minimize latency and maximize synchronization for hybrid quantum-classical algorithms. For example, variational algorithms — where quantum and classical computations must interact frequently — are particularly sensitive to the overhead of data transfers, which can erode performance gains.
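A minimal sketch of such a variational loop, with a placeholder function standing in for the QPU call and an assumed per-call latency, shows why the number of quantum-classical round trips multiplied by dispatch overhead can dominate runtime; none of the numbers here come from the paper.

```python
# Minimal sketch of a variational quantum-classical loop. Each optimizer step
# needs round trips to the "QPU" (a placeholder function here), so any
# per-call latency is paid hundreds of times over a full optimization.
# The cost landscape and latency figure are hypothetical, for illustration.
import math
import time

QPU_ROUND_TRIP_LATENCY_S = 0.005   # assumed dispatch/network overhead per call

def qpu_expectation(theta):
    """Stand-in for a QPU call that estimates an expectation value."""
    time.sleep(QPU_ROUND_TRIP_LATENCY_S)   # latency dominates tight loops
    return math.cos(theta) ** 2            # toy landscape with minimum at pi/2

def optimize(steps=200, lr=0.2, eps=1e-3):
    theta = 0.3
    for _ in range(steps):
        # Finite-difference gradient: two QPU calls per optimizer step.
        grad = (qpu_expectation(theta + eps) - qpu_expectation(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

start = time.time()
theta = optimize()
print(f"theta ~ {theta:.3f} (pi/2 = {math.pi/2:.3f}), wall time {time.time()-start:.1f}s")
print("2 QPU calls per step x 200 steps = 400 round trips; overhead scales with latency")
```

Co-locating the QPU with its classical controllers shrinks the latency term that is paid on every one of those round trips.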

To address these challenges, researchers advocate for tight integration of QPUs with classical CPUs, GPUs, and FPGAs, all sharing system resources like memory and high-speed interconnects. This design ensures that hybrid systems can handle data-intensive tasks efficiently.

Equally important is the software infrastructure enabling users to program these systems seamlessly. Extending existing HPC programming environments, such as the HPE Cray Programming Environment (CPE), provides a natural solution. By incorporating tools for quantum programming, compiling, and dispatching within the familiar HPC framework, developers can build hybrid applications without extensive changes to classical workflows. This approach leverages existing infrastructure while supporting new quantum capabilities.

The integration strategy also focuses on modularity to accommodate diverse quantum technologies, from superconducting qubits to photonic systems. The team writes that quantum-specific software development kits (SDKs) such as CUDA-Q, Qiskit, Cirq, Pennylane and Classiq can interface with HPC systems, enabling scalable execution of quantum workloads.
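As a rough illustration of what dispatching a quantum workload through one of these SDKs looks like, here is a minimal Qiskit sketch that runs a small circuit on a local simulator standing in for a QPU backend. It assumes the qiskit and qiskit-aer packages are installed; a production quantum-HPC stack would swap the simulator for a scheduled hardware backend behind the same kind of interface.

```python
# Minimal Qiskit sketch: build a small circuit and run it on a local
# simulator that stands in for a QPU backend. Assumes qiskit and qiskit-aer
# are installed; this is an illustrative workflow, not the paper's stack.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Prepare and measure a 2-qubit Bell state.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure_all()

backend = AerSimulator()
job = backend.run(transpile(circuit, backend), shots=1024)
counts = job.result().get_counts()
print(counts)   # expect roughly equal '00' and '11' counts
```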

By addressing both hardware and software challenges, the proposed quantum-HPC systems would be better positioned to deliver a user-friendly environment for hybrid computing.

Limitations and Challenges

While the study provides a comprehensive roadmap, there are limitations, and the researchers themselves point to several of these challenges.

Scaling quantum systems, for example, is expensive, with significant costs tied to fabrication, cooling, and control systems. Dilution refrigerators required for superconducting qubits are costly to operate and impose size limitations.

The study also highlights the difficulty of designing error correction codes that can keep up with the demands of large-scale systems. Even with fault-tolerant designs, error rates induced by environmental factors, such as cosmic rays, present a challenge.

Quantum computing is not immune to supply chain issues, the researchers indicate, emphasizing the need for collaboration between chip manufacturers, system integrators, and quantum startups.

For the tech industry, the study offers guidance on collaborating across sectors. “The development of quantum computers must leverage expertise from the semiconductor, HPC, and quantum research communities,” the researchers write, advocating for consortia to accelerate progress.

Where Quantum Goes Next

The roadmap offers several recommendations.

  • Standardizing Architectures: The authors call for the development of a universal quantum operating system that can manage workloads across different quantum and classical hardware platforms.
  • Collaborative Consortia: Building on existing models in classical computing, the study suggests forming cross-disciplinary teams to solve engineering bottlenecks.
  • Improved Algorithms: As hardware scales, developing efficient quantum algorithms will be crucial for unlocking practical applications.

Because the paper is comprehensive and technical, covering far more material than this summary can include, readers are encouraged to consult the paper itself for a deeper, more technical dive into the roadmap.

The research team spans institutions around the world. From Hewlett Packard Labs, the researchers include Masoud Mohseni, K. Grace Johnson, Kirk M. Bresniker, Aniello Esposito, Marco Fiorentino, Archit Gajjar, Xin Zhan and Raymond G. Beausoleil. From Hewlett Packard Enterprise, the team includes Barbara Chapman and Soumitra Chatterjee. Contributors from 1QB Information Technologies (1QBit) in Canada include Artur Scherer, Gebremedhin A. Dagnew, Abdullah Khalid, Bohdan Kulchytskyy, Pooya Ronagh, Zak Webb, and Boyan Torosov. Collaborating from the University of Waterloo and the Perimeter Institute for Theoretical Physics is Pooya Ronagh, under dual affiliations with the Institute for Quantum Computing and the Department of Physics and Astronomy. Representing Quantum Machines in Israel are Oded Wertheim, Ziv Steiner, and Yonatan Cohen. The Department of Physics at the University of Wisconsin–Madison is represented by Matthew Otten and Robert F. McDermott, while Kirk M. Bresniker and Kerem Y. Camsari contribute from the University of California, Santa Barbara's Department of Electrical and Computer Engineering. Alan Ho and John M. Martinis are affiliated with Qolab in California.

Other contributors include Farah Fahim and Panagiotis Spentzouris from Fermi National Accelerator Laboratory, and Davide Venturelli from both the USRA Research Institute for Advanced Computer Science and NASA Ames' Quantum Artificial Intelligence Laboratory. Additional affiliations include Synopsys, represented by Igor L. Markov and John Sorebo, and Applied Materials, represented by Ruoyu Li and Robert J. Visser.

Matt Swayne

With a background in journalism and communications spanning several decades, Matt Swayne has worked as a science communicator for an R1 university for more than 12 years, specializing in translating high tech and deep tech for general audiences. He has served as a writer, editor and analyst at The Quantum Insider since its inception. In addition to his work as a science communicator, Matt develops and teaches courses to improve the media and communications skills of scientists.
