
DARPA to Develop “Yardsticks” for Measuring Value of Future Quantum Information Systems


Although universal fault-tolerant quantum computers – with millions of physical quantum bits (or qubits) – may be a decade or two away, quantum computing research continues apace. It has been hypothesized that quantum computers will one day revolutionize information processing across a host of military and civilian applications, from pharmaceutical discovery to advanced batteries, machine learning, and cryptography. A key missing element in the race toward fault-tolerant quantum systems, however, is a set of meaningful metrics to quantify how useful or transformative large quantum computers will actually be once they exist.

To provide standards against which to measure quantum computing progress and to drive current research toward specific goals, DARPA announced its Quantum Benchmarking program, according to a statement from the agency. Its aim is to re-invent key quantum computing metrics, make those metrics testable, and estimate the quantum and classical resources needed to reach critical performance thresholds.

“It’s really about developing quantum computing yardsticks that can accurately measure what’s important to focus on in the race toward large, fault-tolerant quantum computers,” said Joe Altepeter, program manager in DARPA’s Defense Sciences Office. “Building a useful quantum computer is really hard, and it’s important to make sure we’re using the right metrics to guide our progress towards that goal. If building a useful quantum computer is like building the first rocket to the moon, we want to make sure we’re not quantifying progress toward that goal by measuring how high our planes can fly.”

For more than 20 years, DARPA and other organizations have played a vital role in pioneering key quantum technologies for advanced quantum sensing and computing. DARPA is currently pursuing early wins in quantum computing by developing hybrid classical/intermediate-size "noisy" quantum systems that could leapfrog purely classical supercomputers in solving certain types of military-relevant problems. Quantum Benchmarking builds on this strong quantum foundation to create standards that will help direct future investments.

“Quantum Benchmarking is focused on the fundamental question: How will we know whether building a really big fault-tolerant quantum computer will revolutionize an industry?” Altepeter said. “Companies and government researchers are poised to make large quantum computing investments in the coming decades, but we don’t want to sprint ahead to build something and then try to figure out afterward if it will be useful for anything.”


Coming up with effective metrics for large quantum computers is no simple task. Current quantum computing research is heavily siloed in companies and institutions, which often keep their work confidential. Without commonly agreed-upon standards to quantify the utility of a quantum "breakthrough," it's hard to know what value quantum research dollars are achieving. Quantum Benchmarking aims to predict the utility of quantum computers by attempting to solve three hard problems:

The first is reinventing key metrics. Quantum computing experts are not experts in the systems quantum computers will replace, so new communities will need to be built to calculate the gap between the current state of the art and what quantum computers are capable of. Hundreds of applications will need to be distilled into 10 or fewer benchmarks, and metrics will need to have multi-dimensional scope.

The second challenge is to make metrics testable by creating "wind tunnels" for quantum computers, which don't yet exist. Researchers will need to enable robust diagnostics at all scales in order to benchmark computations that are classically intractable (a sketch of one existing diagnostic, and where it breaks down, follows this list).

A third challenge is to estimate the quantum and classical resources required for a given task. Researchers will need to optimize and quantify high-level resources, which are analogous to the front-end compiler of a classical computer. They will need to map high-level algorithms to low-level hardware, akin to the back-end compiler of a classical computer. Finally, they will need to optimize and quantify low-level resources, which correspond to the transistors, gates, logic, control, and memory of classical computers (a back-of-envelope example also appears below).
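To make the "wind tunnel" problem concrete: one diagnostic in use today, linear cross-entropy benchmarking (XEB), scores a device by checking its measured bitstrings against classically simulated ideal output probabilities, which is exactly the step that becomes impossible once a computation is classically intractable. Below is a minimal illustrative sketch; the Dirichlet draw merely stands in for a simulated random-circuit distribution, and none of this reflects DARPA's planned benchmarks:

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking (XEB) fidelity estimate.

    ideal_probs: length-2**n_qubits array of classically simulated
                 output probabilities for a random circuit.
    samples:     measured bitstrings from the device, as ints in [0, 2**n).
    Returns ~1 for an ideal device (on Porter-Thomas-like circuits)
    and ~0 for uniform noise.
    """
    sampled = np.asarray([ideal_probs[x] for x in samples])
    return (2 ** n_qubits) * sampled.mean() - 1.0

rng = np.random.default_rng(0)
n = 3
ideal = rng.dirichlet(np.ones(2 ** n))            # stand-in for simulation
good = rng.choice(2 ** n, size=10_000, p=ideal)   # "device" matching the ideal
bad = rng.integers(0, 2 ** n, size=10_000)        # fully depolarized "device"
print(linear_xeb_fidelity(ideal, good, n))        # near 2**n * sum(p**2) - 1
print(linear_xeb_fidelity(ideal, bad, n))         # near 0
```

The catch, and part of the reason the program calls for new approaches, is the `ideal_probs` input: it must come from a classical simulation, which no longer exists at the scales Quantum Benchmarking targets.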
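And to make the third challenge concrete, here is a back-of-envelope resource estimate in the style of published surface-code analyses. The 0.1 prefactor, the roughly 1% error threshold, and the 2 * d^2 physical qubits per logical qubit are rough textbook figures, and the example workload is invented for illustration; this sketches the kind of high-level-to-low-level mapping the program describes, not DARPA's actual methodology:

```python
def estimate_surface_code_footprint(logical_qubits, logical_ops,
                                    p_phys=1e-3, p_threshold=1e-2,
                                    target_total_error=1e-2):
    """Pick the smallest odd code distance d whose per-operation logical
    error rate, roughly 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2),
    fits the total error budget, then count ~2 * d**2 physical qubits
    per logical qubit (data plus measurement qubits in a surface code)."""
    budget_per_op = target_total_error / (logical_qubits * logical_ops)
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > budget_per_op:
        d += 2  # surface-code distances are odd
    return d, logical_qubits * 2 * d ** 2

# A hypothetical algorithm: 1,000 logical qubits, 10**9 logical operations
# on hardware with a 0.1% physical error rate.
d, n_phys = estimate_surface_code_footprint(1_000, 10 ** 9)
print(f"code distance {d}, roughly {n_phys:,} physical qubits")
```

Even a toy estimate like this shows why the mapping matters: modest changes in physical error rate or algorithm size move the physical-qubit count by large factors.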

“If we succeed, these benchmarks will serve as guide stars for quantum computer development,” Altepeter said.

The Quantum Benchmarking Broad Agency Announcement is available here: https://go.usa.gov/xHcke. A Proposers Day webinar for interested potential proposers is expected to be held in the coming weeks and will be announced on beta.SAM.gov.


Matt Swayne

With a background in journalism and communications spanning several decades, Matt Swayne has worked as a science communicator for an R1 university for more than 12 years, specializing in translating high tech and deep tech for a general audience. He has served as a writer, editor and analyst at The Quantum Insider since its inception. In addition to his work as a science communicator, Matt develops and teaches courses to improve the media and communications skills of scientists. [email protected]
