
Guest Post: The Unexaggerated Magic of Quantum – Part V: Special Quantum Artificial Intelligence Edition


Guest Post By Shai Phillips

President, PSIRCH

Exposing the emptiness of the h-word: how claims of hyperbole ring hollow; and how nobody, not even today’s experts, can possibly know the limits of Quantum.

In a monthly segment in The Quantum Insider, PSIRCH’s President, Shai Phillips, will conduct a first-of-its-kind audit of broad industry-internal accusations of exaggeration in quantum computing and associated fields. How much truth there is to them will be the focus of this series.


“In the last few months, we’ve seen research teams smash records that were, just a few months before, considered maybe not impossible, but certainly years, if not decades away.”

Matt Swayne, Editor – The Quantum Insider

  • Expert views in Quantum support the outlook that quantum computing is set to change the world, meaning far-reaching generalized projections regarding Quantum are valid and not hyperbolic.
  • There is strong evidence quantum computing will aid artificial intelligence/machine learning, including exponential speedups when quantum data is involved, in training AI, and in generalizing; thus some skeptical statements by experts do not tell the full story and can mislead various audiences.
  • Even expert claims of hyperbole, largely due to the impracticality of qualifying all remarks, can be inaccurate, ironically hyperbolic themselves, and have the potential to slow progress in Quantum. Thus, a cessation of such claims would be beneficial.

Part V – The Biggest Debate in Quantum

For various reasons, many quantum experts who participated in conversations that led to the findings and discussions in this series were not able to be quoted by name. I am truly grateful to these unsung heroes for their valuable help in getting to the bottom of some highly complex subject matter. This series is dedicated to them.

EXPONENTIAL SPEEDUPS IN MACHINE LEARNING WITH QUANTUM DATA

In the quantum industry, there are some things on which most experts agree. One is the prospect that quantum computing will eventually vastly outperform classical computers in simulating nature (as intended by Richard Feynman, Paul Benioff, et al. when first proposing the quantum computer at the first conference on the physics of computation). As such, when it comes to domains like computational chemistry, drug design, performance materials, and so on, there is majority support, even amongst notable skeptics, for the view that quantum will be a superior model of computation. That is the expert consensus, and it alone is already extremely significant in projecting a world-changing role for quantum computing.

On the other hand, when it comes to the prospect of solving certain classes of exponentially hard and classically intractable computational problems, despite vigorous debate (see previous chapters), there is a general consensus amongst experts that, in a strict sense, quantum computers probably won’t fully overcome classical limitations, for instance by outright solving NP-complete or NP-hard problems. That said, it’s not for lack of trying. Just last month, two preprints were released claiming to do exactly that. It remains to be seen whether these papers hold up to scrutiny. It is widely suspected they won’t, but if they do, it will be fairly world-changing. Until then, that is the expert consensus on the negative side for quantum computing.

Finally, right in the middle of these two points of agreement lies a matter of major controversy – the idea that quantum computing will aid significantly in artificial intelligence. Some disagree with this prospect; others see it as highly likely, even going so far as to base their life’s work on it. One thing we can say: there is extremely compelling evidence that the optimistic experts are in the right on this, and some of the skeptics have been caught off-guard on occasion. Let’s take a look at one such example:

At QC Ware’s 2021 Q2B conference, Professor Scott Aaronson vociferously dismissed the idea that quantum machine learning would see exponential speedups. However, as of late 2021, there has been a strong showing that such a statement may not represent the truth of the matter, at least not in full. In my early research, I was immediately pointed in the direction of a paper purporting exactly this exponential speedup in machine learning: “Quantum Advantage in Learning from Experiments”, by Huang, Preskill, et al. In my chat with the UT Austin professor, Aaronson called into question what we sometimes mean when we use the term “exponential”. However, one useful distinction did arise at the Q2B conference itself – the dichotomy between classical and quantum data in quantum machine learning: right at the end of the Q&A (easy to miss!), an astute audience member quite literally afforded the professor an opportunity to qualify his earlier statement by taking quantum data into account, and Aaronson agreed, without hesitation, that when quantum data is involved, we may indeed see exponential speedups with quantum machine learning. (This will be especially significant once the capability to exploit networked quantum devices in a more mature quantum ecosystem comes into play, namely quantum sensors. In that scenario, the applications are vast.)

Perhaps that is half the battle right there – instead of attempting to translate classical problems into quantum equivalents, we need to begin thinking quantumly even at the problem-framing stage. In sum, despite what a transient onlooker might have taken from this Q2B appearance, there do seem to be, in certain circumstances, exponential speedups in quantum machine learning, so deploying the h-word here was a perfect example of how claims of hyperbole in quantum computing have become so overblown as to be highly inappropriate. This is especially true of designating such a negative assertion “a fact”, as we saw. It seems to have been an unfair disparagement, to say the least. Thankfully, owing to an astute audience member, the oft-omitted caveat was allowed to surface and put a different spin on it. Here, in particular, the emptiness of the h-word was especially vivid.

EXPONENTIAL SPEEDUPS IN TRAINING NEURAL NETWORKS

Until 2008, Shor’s algorithm really did stand alone. Enter the HHL (Harrow-Hassidim-Lloyd) quantum algorithm for solving linear systems of equations, with an exponential speedup over classical computation. According to qiskit.org: “HHL is thought to be massively useful in machine learning”, such as “in an optimized linear or non-linear binary classifier”. It goes on to say that “a quantum support vector machine can be used for big data classification and achieve exponential speedup over classical training due to the use of HHL.”

HHL does indeed provide an exponential speedup over classical algorithms, but in a very specific way, namely for sparse-matrix inversion, which can be used in machine learning. The encouraging part is that many are building on its success with applied efforts and derivative algorithms of promising utility. No need to go beyond Wikipedia for this kind of basic information. Referencing Lu and Pan’s paper, “Quantum Computer Runs the Most Practically Useful Quantum Algorithm”, the page notes: “Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential for widespread applicability.” It goes on to offer examples like electromagnetic scattering (“This was one of the first end-to-end examples [2013] of how to use HHL to solve a concrete problem exponentially faster than the best-known classical algorithm”), then references machine learning specifically: “In June 2018, Zhao et al. developed an algorithm for performing Bayesian training of deep neural networks in quantum computers with an exponential speedup over classical training due to the use of the quantum algorithm”.
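
To make concrete what kind of problem HHL actually targets, here is a minimal classical sketch in Python (the matrix and sizes are illustrative assumptions of mine, not drawn from any of the papers above). HHL does not hand back the solution vector itself but a quantum state proportional to it, which is why the readout caveat discussed later in this piece matters.

```python
# Illustrative sketch (not HHL itself): the classical problem HHL targets.
# HHL prepares a quantum state |x> proportional to the solution of A x = b,
# with runtime roughly poly(log N, s, kappa, 1/eps) for an N x N matrix of
# sparsity s and condition number kappa. Here we just solve a small sparse,
# well-conditioned system classically to show the inputs the algorithm assumes.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

N = 1024                                  # 2**10, i.e. 10 qubits' worth of amplitudes
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(N, N), format="csc")  # sparse, well conditioned
b = np.ones(N)

x = spsolve(A, b)                         # classical cost grows with N; HHL's grows with log N
print("residual:", np.linalg.norm(A @ x - b))

# Caveat mirrored from the text: HHL outputs |x> as amplitudes, so reading out
# all N entries would erase the exponential advantage; one instead estimates
# global properties of the solution, such as expectation values.
```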

Now, I can be just as skeptical as the next person, and oftentimes more skeptical than your average bear, so I wasn’t about to simply take Wikipedia’s word for it, and I started perusing the actual paper for an explicit statement of this holy-grail achievement – an actual exponential speedup in quantum machine learning – a bold claim indeed. Initially, I was worried my worst suspicions would be confirmed, as the paper explicitly claims only “at least a polynomial speedup” in its Gaussian process approach. That hardly equates to an exponential speedup, but I investigated further and learned, from one of the smartest quantum experts I know (whose name I sadly don’t have permission to quote here), that there is in fact an exponential speedup: the paper does assert one under certain circumstances, namely in the case of sparse covariance matrices. How restrictive this requirement is will likely become an area for further research and development.
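
For readers wondering where that sparse-covariance condition actually bites, here is a minimal sketch of the linear solve at the heart of Gaussian-process (Bayesian) training, the step an HHL-style algorithm would accelerate. The compactly supported kernel and all numbers are illustrative assumptions of mine, not taken from Zhao et al.

```python
# Minimal sketch of the step a quantum linear-system solver would accelerate in
# Gaussian-process (Bayesian) training: solving K alpha = y for a sparse
# covariance matrix K. The compactly supported kernel below is an illustrative
# assumption; it guarantees exactly zero covariance between distant points,
# which is the "sparse covariance" regime the claimed speedup relies on.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 100.0, size=500))          # 1-D training inputs
y = np.sin(0.3 * X) + 0.1 * rng.standard_normal(500)    # noisy targets

r = np.abs(X[:, None] - X[None, :])                      # pairwise distances
support = 2.0                                            # covariance is zero beyond this radius
K = np.where(r < support, (1.0 - r / support) ** 2, 0.0) # compactly supported kernel
K[np.diag_indices_from(K)] += 1e-2                       # noise term keeps K well conditioned

K_sparse = csc_matrix(K)
print("nonzero fraction:", K_sparse.nnz / K.size)        # small fraction -> sparse solve applies
alpha = spsolve(K_sparse, y)                             # the solve HHL-type methods target
```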

The “sparse and well-conditioned” restriction, however, has not seemed to hamper other efforts at showing quantum enhancements in training neural networks. I was pointed in the direction of one such paper by Pierre-Luc Dallaire-Demers, CEO and Founder of Pauli Group. The paper, “Towards provably efficient quantum algorithms for large-scale machine-learning models” by Junyu Liu, Jens Eisert, et al., is also based on the exponential speedup in sparse-matrix inversion of the HHL algorithm, and asserts a super-polynomial speedup (a term often used interchangeably with “exponential speedup”) in the core machine-learning routine of gradient descent, aiding applications like large language models (LLMs), so long as the models are both sufficiently “dissipative and sparse”. However, in this case, it goes one step further in showing exactly how such an algorithm can be useful near-term, despite the restriction: “we suggest to find a quantum enhancement at the early stage of training, before the error grows exponentially.” The paper also points to conclusive evidence that quantum machine learning is provably exponentially faster than its classical counterpart in a mature ecosystem: “for fault-tolerant settings of quantum machine learning problems, rigorous super-polynomial quantum speedups can actually be proven for highly structured problems.” Lastly, it concludes that the primary application for its algorithms would be the training of neural networks, showing how compelling the evidence is that quantum computing will play an important role in artificial intelligence as both fields advance in tandem. Hand-in-hand, they will take us further than we imagined.

CIRCUMVENTING & REDUCING CLASSICAL DATA INPUT BOTTLENECKS

Still, when it comes to QML, we can’t claim victory just yet, as there are major hurdles to overcome. First, there is an extra factor in the runtime relative to classical algorithms; if this extra parameter is too large, we have a problem, but it can often be controlled, so it only ruins things in certain cases. Second, when solving a problem with HHL, the sought result is not what is immediately yielded: there is a tricky extra step requiring a kind of tomography-style sampling, given that the quantum memory holding the solution cannot simply be read out. So the output is a delicate issue. Mostly, though, it’s the input we need to worry about. This third and final item is the most challenging, as inputs can require as many qubits as classical bits to store – and that’s a lot! There are ways around this, such as QRAM models or, circling back to our earlier solution, using quantum rather than classical data.
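
The input bottleneck is easiest to appreciate with numbers. The back-of-the-envelope sketch below is my own illustration, with assumed figures: n qubits can hold 2^n amplitudes, which is the blessing, but preparing an arbitrary such state generally costs on the order of 2^n gates, which is the curse.

```python
# Back-of-the-envelope sketch of the classical-data input bottleneck.
# n qubits can store a vector of 2**n amplitudes, but preparing an *arbitrary*
# amplitude-encoded state generally needs on the order of 2**n elementary gates,
# so loading unstructured classical data can cost as much as processing it classically.
import numpy as np

n_qubits = 20
dim = 2 ** n_qubits                      # ~1 million amplitudes in just 20 qubits

data = np.random.rand(dim)
state = data / np.linalg.norm(data)      # amplitude encoding = normalized data vector

print(f"qubits needed to hold the data:            {n_qubits}")
print(f"amplitudes represented:                    {dim}")
print(f"worst-case gates for arbitrary state prep: O(2^n) ~ {dim}")
# Structured, compressible data (see the tensor-network results discussed below)
# or quantum-native data from sensors are the ways this worst case gets avoided.
```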

Despite these challenges, if we consider present-day efforts, we see just how fast the pace of advancement is in this area specifically. A flurry of announcements landed. First, from IonQ: milestones in using quantum computing for practical applications in generative AI, including progress in modelling human cognition, as well as a partnership with Zapata to continue exploiting quantum computing for generative AI. IonQ’s CEO, Peter Chapman, speaking at the opening ceremony for the National Quantum Laboratory (QLab) with the University of Maryland, also elaborated on the established strategic partnership with Hyundai, announcing combined efforts to enhance the cognitive capabilities (the brain) of the famed Boston Dynamics humanoid robots, which many of us will have seen dancing around in YouTube clips. These items were interspersed with news from Quantinuum that their Chief Scientist, Bob Coecke, in partnership with Masanao Ozawa of Japan’s Chubu University, is also working to apply quantum to modelling human cognition, in areas like human decision-making. With Coecke in the mix, it’s especially promising, as his approach may circumvent the loading bottleneck and big-data limitation that until now have been expected to impose heavy constraints on quantum computers. If Coecke succeeds in applying the rubric of category theory, utilizing a structural method that more closely resembles simulation than current models of AI, it will be revolutionary for the field. In talking to experts, some questioned whether specific LLM applications would ever see benefits from QML, particularly following the celebrated ChatGPT news. Coecke, however, who has been working hard to advance QNLP (quantum natural language processing), saw it differently: in his unquestionably expert view, language falls squarely into quantum-beneficiary territory.

The aforementioned loading bottleneck, referring to classical data input and processing, was given scholarly emphasis in an authoritative paper by Hoefler, Häner, and Troyer: “Disentangling Hype from Practicality: On Realistically Achieving Quantum Advantage.”

Before we delve into this paper, a couple of things to note. First, my entire article series is dedicated to advocating for the elimination of the h-word, so to see it in a serious academic paper is rather disheartening, and it simultaneously betrays the possibility of a prejudicial agenda on the part of its authors. “Hype” is neither an academic word nor a technical concept, and as such has no place in a scientific paper. It calls into question the authors’ motives, and thus the paper’s reliability, from a purely psychological standpoint. For any paper entitled “Disentangling Hype”, it is natural to ask: were the authors hell-bent on proving a predetermined conclusion, or did they design their analysis in a way that would allow for conclusions they were not looking to prove, and that did not fit their narrative? The paper repeatedly, perhaps excessively, insists that its assumptions err on the side of optimism, and certainly the technical considerations in the paper do lean that way, but its overall framing does seem molded toward reinforcing less-than-optimistic attitudes. For instance, why not avoid the h-word entirely and simply call the paper “Identifying likely near-term applications for quantum advantage” or “Requisite speedups to achieve quantum advantage”? That type of verbiage takes a neutral stance, and would therefore seem more credible to me.

Still, even with this less-than-ideal presentation, a close read of the paper shows that, while the attitude leans ironically pessimistic in some ways (though certainly not in all ways), the actual substance can’t help but tell a different story. For example, the paper asserts that: “A large range of problem areas with quadratic quantum speedups, such as many machine learning training approaches… will not achieve quantum advantage with current quantum algorithms in the foreseeable future.” Look closely and note two items:

First, the paper categorically does not state that machine learning will not benefit from quantum advantage. All it asserts is that machine learning approaches limited to a quadratic quantum speedup will not achieve quantum advantage. We have just seen multiple examples above of exponential quantum speedups in machine learning training. So, despite what one might hear in passing, there is absolutely no refutation of QML here.

Second, note also that this taciturn conclusion is contingent on “current algorithms”, and obviously cannot account for algorithmic enhancements. The thing is, the entire progress of the field of quantum computation is predicated on algorithmic improvements. If we stick with the current state of quantum algorithms, we’re pretty much going nowhere anyway, save a couple of exceptions, namely Shor’s and HHL.
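
To see why the Hoefler, Häner, and Troyer analysis is so hard on quadratic speedups in the first place, a toy crossover calculation helps. The gate-speed and overhead figures below are illustrative assumptions of mine, not the paper’s exact numbers, but the arithmetic captures the core of its argument.

```python
# Toy crossover arithmetic behind "quadratic speedups are not enough".
# Assumed figures (mine, not the paper's): a classical machine sustaining ~1e13
# effective operations per second versus an error-corrected quantum computer
# running ~1e4 logical operations per second.
classical_rate = 1e13   # effective classical ops/second (parallelism included)
quantum_rate = 1e4      # logical quantum ops/second after error-correction overhead

# Classical cost: N / classical_rate.  Quantum (Grover-style) cost: sqrt(N) / quantum_rate.
# They break even when N = (classical_rate / quantum_rate) ** 2.
N_star = (classical_rate / quantum_rate) ** 2
t_star = N_star / classical_rate

print(f"break-even problem size: {N_star:.1e} operations")
print(f"runtime at break-even:   {t_star:.1e} s (~{t_star / 3600:.0f} hours)")
# With these assumptions, the machines must run for over a day before a bare
# quadratic speedup pays off; fuller accountings (I/O, constants, QEC cycles)
# push this out further, which is why the paper asks for cubic or better speedups.
```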

Admittedly, the paper does on occasion wax optimistic, corroborating broadly accepted projections that there is “huge potential in material science and chemistry” for quantum computing. However, one of the paper’s principal premises is that quantum computing should be limited to “small data problems” for practicality, because of “the fundamental I/O bottleneck” introduced above. There is broad consensus in the quantum community for this idea. Yet even this limitation may not last long; we are already seeing good progress in overcoming it. Keep in mind that the paper is referring to classical data only. We’ve already shown how quantum data, e.g. from quantum sensors, does not suffer from this issue and circumvents the loading bottleneck. And even with classical data, there is light at the end of the tunnel. In fact, as of recently, the understanding of this aspect may have changed somewhat, such that it might not end up being the insurmountable obstacle so many believe it to be…

In late 2023, a paper by Jumade and Sawaya (Intel Corporation) on the novel AMLET algorithm (Automatic Multi-layer Loader Exploiting Tensor Networks), entitled “Data is often loadable in short depth: quantum circuits from tensor networks for finance, images, fluids, and proteins”, asserted that “classical problems may be more likely than previously thought to provide quantum advantage, even when the input step of loading data is explicitly considered”; that “the ‘input problem’ will likely not be an obstacle for many scientifically interesting classical workloads”; and that “several classes of real-world classical data can be loaded in short depth”. The paper also notes: “Relatedly, quantum random-access memory (QRAM) promises to partially mitigate the input problem, though such a solution would require more complex hardware and substantially more qubits”.
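
One rough way to build intuition for why data is often loadable in short depth is compressibility: when a reshaped data vector has rapidly decaying singular values, it admits a low-bond-dimension tensor-network description, and hence a shallow loading circuit. The sketch below is my own illustration of that intuition, not the AMLET algorithm itself; the signals and threshold are assumptions.

```python
# Toy compressibility check behind "data is often loadable in short depth".
# Reshape a 2**n-amplitude data vector into a matrix and look at its singular
# values: a fast-decaying spectrum means a low-bond-dimension, MPS-style state
# across that cut, which corresponds to a shallow loading circuit.
# Illustration of the intuition only, not the paper's AMLET algorithm.
import numpy as np

n = 16
dim = 2 ** n

x = np.linspace(0.0, 1.0, dim)
smooth_data = np.exp(-((x - 0.5) ** 2) / 0.02)        # structured, "real-world-like" signal
random_data = np.random.default_rng(1).random(dim)    # unstructured worst case

def effective_rank(v, tol=1e-3):
    v = v / np.linalg.norm(v)
    M = v.reshape(2 ** (n // 2), 2 ** (n // 2))        # bipartition across the middle qubits
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s / s[0] > tol))

print("effective rank, smooth signal:", effective_rank(smooth_data))   # small -> short depth
print("effective rank, random noise: ", effective_rank(random_data))   # near full -> deep circuit
```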

I spoke with one of the paper’s authors, Nicolas Sawaya, who remarked that while this paper is explicit in these assertions, and perhaps extends classical data loading for quantum computing as a line of work that may continue to see enhancements, it is not the first published piece to float the idea. Sawaya pointed to previous publications that had already explored the notion, including “Quantum pixel representations and compression for n-dimensional images” by Amankwah et al. and “Fast approximate quantum circuits for block encodings” by Camps and Van Beeumen. This is not to say that there aren’t major challenges ahead, but throwing in the towel on classical data loading in quantum computing at this juncture is highly premature, to say the least.

The aforementioned paper limits itself to 10,000 logical qubits. Whether we can expect more within the next decade depends on who you ask, but I won’t go so far as to call this a pessimistic projection; we are a long way off from that number. The paper insists that its conclusions hold unless we see algorithmic improvements, which we are already beginning to see. For instance, the paper notes that “Aaronson et al. show the achievable quantum speedup of unstructured black-box algorithms is limited to O(N⁴)”. As discussed in previous chapters of this article series, while this is not inaccurate, it can be misleading without taking into account recent discoveries in the literature regarding unstructured search, namely Yamakawa and Zhandry’s 2022 “Verifiable Quantum Advantage without Structure”.

We cannot exit this discussion without first considering a similar paper which addresses this same matter, by Babbush et al.: “Focus beyond quadratic speedups for error-corrected quantum advantage.” Much like the previous paper, this one asserts that at least cubic or quartic speedups are needed for practical quantum advantage. It does not address machine learning specifically, but since the previous paper does, we can tie them together here. Much like the above paper, this one is replete with caveats, such as: “quadratic speedups will not enable quantum advantage on early generations of such fault-tolerant devices unless there is a significant improvement on how we would realize quantum error correction.” Well, yes, we would hope there would be such a QEC improvement; otherwise, what are we all working on here? The paper also points out that its analysis addresses “the prospects for achieving quantum advantage via polynomial speedup for a broad class of algorithms, rather than for specific problems.” We have already seen, in previous chapters of this article series, that quantum experts assert that, when it comes to “specific problems”, smaller speedups can indeed be very useful. Lastly, the paper states explicitly that “there might exist use cases involving quadratic speedups that defy the framework of this analysis.”

One final paper on the subject of classical data must be examined before we close out this discussion: “A rigorous and robust quantum speedup in supervised machine learning.” The previously referenced papers by Troyer and Babbush, which seem to discount QML as a near-term application (but actually do no such thing in any absolute sense), do nonetheless categorically identify classical data as a practically insurmountable obstacle for quantum computing, and thus QML, for the foreseeable future: “We immediately see that any problem that is limited by accessing classical data… will be solved faster by classical computers.” And yet, here we have a 2020 paper by Liu et al. (IBM and UC Berkeley) clearly showing there are settings in which classical data access does not pose the insurmountable obstacle for QML that we are led to believe. In the precise words of the paper’s authors: “we establish a rigorous quantum speed-up for supervised classification using general-purpose quantum learning algorithm that only requires classical access to data.” There is no ambiguity in the paper, in that the goal was “to find one example of such a heuristic quantum machine learning algorithm, which given classical access to data can provably outperform all classical learners for some learning problem. In this paper, we answer this in the affirmative. We show that an exponential quantum speedup can be obtained via the use of a quantum-enhanced feature space.” The paper goes on to assert that its authors “prove an end-to-end quantum advantage… where [their] quantum classifier is guaranteed to achieve high accuracy for a classically hard classification problem” (using the standard kernel method in support vector machines). Lastly, the paper indicates that this exponential speedup is applicable to generative AI: “evidences of an exponential quantum speedup were shown for a quantum generative model.”

Thus we see that, while quantum artificial intelligence is hardly low-hanging quantum fruit like computational chemistry, drug design, cryptography, performance materials, or the simulation of nature, it is well within reach of near-term quantum computing if given enough focus, attention, and funding. And while the I/O bottleneck in loading classical data is by no means a piece of cake, it is also a far cry from a lost cause in the foreseeable future. There is ample compelling evidence, and continued progress, indicating that we can indeed wrap our arms around this problem to the benefit of quantum machine learning and the bright future of quantum artificial intelligence. Classical data need not be the towering mountain some are making it out to be. The picture is much rosier than that, with QML remaining a potentially powerful application for quantum computing in the near-to-mid-term.
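
To ground what a “quantum-enhanced feature space” means operationally: in Liu et al.’s construction, the quantum computer only evaluates the kernel, while the rest of the support-vector-machine pipeline stays classical. The sketch below shows that classical scaffolding with a stand-in kernel; the actual quantum kernel in the paper, built around a classically hard problem related to the discrete logarithm, would simply replace the kernel_matrix function. The dataset and kernel choice here are illustrative assumptions of mine.

```python
# Scaffolding sketch: where a quantum kernel plugs into a standard SVM pipeline.
# The claimed advantage comes from replacing the kernel below with one estimated
# on a quantum computer (a quantum-enhanced feature space); everything else stays
# classical. The RBF kernel here is a classical stand-in for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def kernel_matrix(A, B, gamma=0.5):
    # Stand-in kernel; a quantum version would estimate |<phi(a)|phi(b)>|^2 on
    # hardware for feature states |phi(.)> and return a matrix of the same shape.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(X_train, X_train), y_train)
accuracy = clf.score(kernel_matrix(X_test, X_train), y_test)
print(f"test accuracy with the stand-in kernel: {accuracy:.2f}")
```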

In sum, we may need greater-than-quadratic speedups for machine learning only if there are no quantum algorithmic or QEC improvements, which there absolutely must be for the field to advance. Either way, we already have compelling evidence of precisely such faster speedups for quantum machine learning. The above papers do not dismiss machine learning training approaches unless their quantum speedups are quadratic or less, and we already have exponential quantum speedups in machine learning training and classification tasks. Lastly, classical data input bottlenecks are not only circumvented by Coecke’s categorical approach and by the use of quantum data, but are also seeing improvements and results without the need for circumvention, such that they may soon no longer pose the obstacle they do today, especially once QRAM is developed. We are making steady progress with classical data. If we must go beyond quadratic speedups, or stick with small-data problems, so be it, but we may not have to. In either case, quantum machine learning remains a promising, viable contender for practical quantum advantage in the not-too-distant future.

(Tune in next time for further compelling evidence of quantum artificial intelligence.)

Shai Phillips

Retained Executive Search
