Insider Brief
- A new arXiv study from researchers at Caltech, MIT, Google Quantum AI, and AWS proposes a framework for identifying genuine quantum advantages and warns that some may be inherently unpredictable without quantum computers.
- The paper outlines five keystone properties of a true advantage — predictability, typicality, robustness, verifiability, and usefulness — and divides quantum advantages into four realms: computation, learning and sensing, cryptography and communication, and space advantages.
- The researchers report that proving and sustaining quantum advantage is complicated by advances in classical methods, noise in quantum systems and the possibility that the most transformative applications may only emerge once large-scale quantum technologies are deployed.
The quantum industry is intently focused on proving that quantum technology can add value to solving real-world problems. That’s led to a flurry of research into use cases across fields as diverse as drug discovery and package delivery.
A team of researchers argues that proving quantum advantage is far from straightforward — and that some of the most significant advances in quantum technology may be ones scientists cannot yet imagine. In fact, they suggest it might take a powerful quantum computer to determine whether a quantum advantage exists in the first place.
In a new study posted to arXiv, researchers from Caltech, MIT, Google Quantum AI and AWS examine the idea of “quantum advantage” — the point at which quantum systems can decisively outperform all possible classical approaches — and offer a structured framework for determining when an advantage is real, practical and not just an illusion.

The team writes: “We prove that some quantum advantages are inherently unpredictable using classical resources alone, suggesting a landscape far richer than what we can currently foresee. While mathematical rigor remains our indispensable guide, the ultimate power of quantum technologies may emerge from advantages we cannot yet conceive.”
Separating Reality from Illusion
The researchers warn that quantum advantage is a more slippery concept than many public announcements suggest. Demonstrations that seem to prove an advantage sometimes collapse when a previously unknown classical algorithm achieves the same performance. This “pseudo-advantage” problem has cropped up repeatedly, from early quantum recommendation systems to highly publicized claims of exponential speedups in data analysis.
According to the study, rigorous mathematical analysis and realistic benchmarking are the best tools for distinguishing between true breakthroughs and mirages. Without them, investors, policymakers, and even researchers risk misjudging where quantum technology stands — and where it should be headed.
Five Keystone Properties of a True Advantage
The study identifies five “keystone” properties that an ideal quantum advantage should have:
- Predictability — backed by strong theoretical or empirical evidence before the necessary hardware exists.
- Typicality — applicable to a broad class of real-world problems rather than rare, contrived cases.
- Robustness — able to survive the noise, imperfections, and other constraints of real-world operation.
- Verifiability — allowing results to be efficiently checked for correctness.
- Usefulness — offering tangible value to a user who may not care whether the solution is quantum or classical.
Many proposed advantages, they note, have failed one or more of these tests. In sensing, for instance, quantum entanglement can theoretically improve measurement precision, but in practice, small amounts of noise often wipe out the gain. In computing, algorithms like Shor’s factoring remain compelling, but others have seen their headline speedups reduced to modest gains once classical methods improved.
Beyond Computing: Four Realms of Quantum Advantage
The paper divides quantum advantage into four “realms,” each with different underlying mechanisms and implications.
The first is computation, the most widely recognized arena for quantum superiority. According to the paper, quantum algorithms promise faster solutions to problems such as factoring large numbers, simulating quantum systems, or optimizing complex processes. These claims typically rest on unproven complexity-theoretic assumptions — in particular, that there exist problems solvable efficiently on a quantum computer that no classical computer can match. Shor’s factoring algorithm remains the best-known example. Yet the researchers caution that this territory is far from settled. Advances in classical algorithms and hardware continue to chip away at the gap, making the contest between the two paradigms a moving target rather than a fixed threshold.
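The number-theoretic reduction underneath Shor’s algorithm fits in a few lines. The sketch below is illustrative only (it is not from the paper): it finds the multiplicative order of a number by brute force, which is exactly the step a quantum computer performs exponentially faster via period finding.

```python
from math import gcd

def order(a, n):
    """Multiplicative order of a mod n, found by brute force.
    This exhaustive search is the step Shor's quantum period
    finding replaces with an exponentially faster subroutine."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_core(n, a):
    """Classical reduction at the heart of Shor's algorithm:
    an even order r of a mod n yields factors via gcd(a**(r//2) +/- 1, n)."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g  # lucky guess: a shares a factor with n
    r = order(a, n)
    if r % 2:
        return None  # odd order: try a different a
    y = pow(a, r // 2, n)  # modular exponentiation
    for candidate in (gcd(y - 1, n), gcd(y + 1, n)):
        if 1 < candidate < n:
            return candidate, n // candidate
    return None

print(shor_classical_core(15, 7))  # → (3, 5)
```

The brute-force `order` loop takes time exponential in the number of digits of `n`, which is why this classical version does not threaten cryptography while the quantum version would.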
Beyond raw computation, the learning and sensing realm taps into quantum mechanics’ direct connection to the physical world. Devices such as nanoscale magnetometers and ultra-precise atomic clocks operate at sensitivity levels dictated by the uncertainty principle, a bedrock law of physics rather than a computational conjecture. Because these limits are physically grounded, they are less vulnerable to sudden algorithmic leaps on the classical side. The paper covers the concept of “quantum learning agents” — integrated systems that combine quantum sensors, quantum memory and quantum processors to collect, store and analyze information coherently. In some scenarios, such agents can extract knowledge from exponentially fewer measurements than any classical counterpart, offering an edge that emerges not from theory alone but from the nature of the universe itself.
The team labels the third realm as cryptography and communication. This is where the advantage comes from the inviolable rules of quantum physics. Principles like the no-cloning theorem, which forbids the perfect copying of an unknown quantum state, and the inevitability of measurement disturbance form the foundation for unbreakable security schemes. Quantum key distribution allows two parties to establish a shared cryptographic key in a way that reveals any eavesdropping attempt. Certified randomness generation ensures the production of truly unpredictable numbers, while quantum-protected data deletion offers a guarantee — enforced by physics — that information has been permanently erased. Unlike computational advantages, which could erode if a better classical algorithm appears, these security benefits remain as long as quantum mechanics itself holds.
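The measurement-disturbance idea behind quantum key distribution can be illustrated with a small classical simulation of a BB84-style exchange (an illustrative sketch, not taken from the paper): an intercept-and-resend eavesdropper who must guess measurement bases at random corrupts roughly a quarter of the bits that the two parties later compare, revealing the attack.

```python
import random

def bb84_error_rate(n_bits, eavesdrop):
    """Simulate BB84: Alice sends bits in random bases, Bob measures in
    random bases. An intercept-resend eavesdropper measures each qubit
    and resends it, unavoidably disturbing mismatched-basis states."""
    alice_bits = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("XZ") for _ in range(n_bits)]
    bob_bases = [random.choice("XZ") for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = random.choice("XZ")
            # Eve's measurement in the wrong basis randomizes the bit...
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            # ...and the resent qubit is now prepared in Eve's basis.
            a_basis = e_basis
        # Bob's measurement in a mismatched basis yields a random bit.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep positions where Alice and Bob chose the same basis,
    # then publicly compare a sample to estimate the error rate.
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

random.seed(0)
print(f"no eavesdropper:   {bb84_error_rate(20000, False):.3f}")  # ~0.000
print(f"intercept-resend:  {bb84_error_rate(20000, True):.3f}")   # ~0.250
```

Without an eavesdropper the sifted bits agree perfectly; with one, errors appear at a rate near 25%, so Alice and Bob can detect the intrusion and discard the key.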
The fourth realm — and perhaps the least explored, the team notes — is what the researchers call space advantages. Quantum systems can encode information into the exponentially large state space of qubits, representing vast datasets in memory footprints far smaller than any classical equivalent. While the Holevo bound — a principle in quantum information theory that limits how much ordinary data you can recover from a quantum state — means you can’t fully extract everything you’ve stored, you can still use the compressed quantum state to solve certain problems faster than any known classical approach. This kind of advantage is especially useful when data arrives faster than it can be stored, allowing calculations to be done on the fly without ever keeping the entire dataset in classical memory.
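For intuition on that streaming setting, a purely classical analogue is one-pass computation in constant memory — for example, Welford’s algorithm below, which summarizes a stream without ever storing it (this is an illustration of the streaming idea only; the space advantage the paper describes comes from compressing the stream into qubits instead).

```python
class RunningStats:
    """Welford's one-pass algorithm: mean and variance of a data stream
    in O(1) memory, without ever keeping the dataset itself."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)  # each value is processed once, then discarded
print(f"mean={stats.mean:.3f} variance={stats.variance():.3f}")
```

Quantum space advantages push this idea further: a quantum memory can retain kinds of information about a stream that no small classical summary can.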
Why Predicting Advantage Can Be Impossible
One of the more striking claims in the study is that some quantum advantages are inherently unpredictable using classical reasoning alone. The researchers report that, under widely believed assumptions in complexity theory, determining whether a specific quantum process outperforms the best classical method can itself be a task that requires a quantum computer.
This creates a paradox: mapping the full landscape of quantum advantage may require deploying the very quantum technologies whose power we are trying to assess. If correct, this means that some of the most valuable applications could emerge only after large-scale, general-purpose quantum systems are in place.
From Proof to Practice
The paper also anticipates a shift in how advantages are judged. In the early stages of the field, mathematical proofs and complexity-theoretic arguments dominate. But as quantum hardware becomes more capable, empirical performance — how fast, cheap and reliable a method is in practice — will matter more. Just as in classical computing, where many widely used algorithms are not the ones with the best theoretical guarantees, future quantum algorithms may win adoption for practical reasons even if their formal speedups are modest or unproven.
The team introduces the concept of “concept-to-solution” time, which is the full span from identifying a problem to delivering a working answer. A quantum method might be preferable if it shortens this timeline, even if a classical algorithm with similar run time exists but takes years to develop.
The study also points out that quantum methods could be valuable for reasons beyond speed. A quantum approach might provide a conceptually simpler or more natural framework for certain problems, making them easier to integrate into larger workflows. In science, conceptual simplicity has historically been a driver of progress, as with the heliocentric model of the solar system, which offered a clearer picture despite similar predictive accuracy to some geocentric models, the team suggests.
Quantum systems may also display emergent capabilities that were not predicted in advance, much as large-scale machine learning models have. These could arise from the sheer scale and complexity of the systems themselves, independent of any single algorithmic breakthrough.
The Future of Advantage?
The quantum advantage stakes are not merely academic, the researchers stress. Billions of dollars in public and private funding are flowing into quantum technologies, and governments worldwide are launching national programs, the team writes. A disciplined framework for recognizing genuine advantage can help ensure those resources are aimed at applications that will deliver real-world value.
The team writes: “As quantum technologies rapidly advance and require substantial investments of human effort and financial resources, establishing rigorous theoretical foundations becomes essential to ensure these efforts are well-directed. Quantum computing ventures now attract billions of dollars in investment, and governments worldwide launch major quantum initiatives, making the stakes for establishing genuine quantum advantages increasingly important. Without solid theoretical foundations, we risk pursuing technological directions that may ultimately prove less effective than anticipated, potentially misallocating significant resources and undermining confidence in quantum technologies.”
The biggest impacts may come from uses no one has yet thought of, according to the researchers. Just as the original inventors of the transistor could not have foreseen smartphones or global internet commerce, the most transformative quantum applications may only emerge when the technology is already woven into everyday tools.
“We look forward to a future where a broad spectrum of quantum advantages come to light: some empirically discovered, others conceptually transformative, and many fundamentally unpredictable using current technology,” they write. “Just as the most impactful applications of classical computers were unimaginable to their early pioneers, the true potential of quantum technology might only reveal itself through a journey of development and discovery.”
For a deeper, more technical look at the research, please read the paper on arXiv. Pre-print servers like arXiv help researchers distribute their findings quickly, particularly in fast-moving fields such as quantum computing. However, pre-print studies have not yet been peer reviewed, a key step in the scientific method.
The research team includes Hsin-Yuan Huang, of the California Institute of Technology and Google Quantum AI; Soonwon Choi, of the Massachusetts Institute of Technology; Jarrod R. McClean, of Google Quantum AI; and John Preskill, of the AWS Center for Quantum Computing.



