Guest Post: Why AI Regulation Won’t Work for Quantum

Guest Post by Charlene Flick
This post first appeared on LinkedIn.

In an AI-obsessed world, quantum computing innovation plods along just beneath the surface in relative obscurity, largely confined to the domain of physicists and academia. But look beyond the promise of artificial intelligence and multiply it manyfold, and you may find quantum. Examine the challenges that AI adoption creates and you may very well find a solution: quantum. Consider humanity's aspirations for unprecedented innovation, curing disease, eliminating inequality, and reversing the demise of our planet, and therein lies the promise of quantum computing.

I say "may" because quantum computing is still very much in its infancy, so we don't know its real-world impact with certainty, but it holds great promise. And although the combination of quantum computing and AI applications will change our world, we must resist the temptation to treat these distinct technologies the same from a regulatory perspective. We regulate emerging technology to mitigate harm and clear a path for innovation, but policymakers around the globe need to understand the distinct risks and opportunities these innovations bring so as to preserve this delicate balance, even when use cases are still evolving.


Artificial intelligence extrapolates from massive amounts of data upon which it has been trained to see patterns and find solutions that humans might not necessarily happen upon. Given the enormity of these data sets, humans extract value from them far less efficiently than an AI program can. Notably, while AI is limited to the data it is trained upon, quantum computing is not confined to what we know. Rather, it can interpret our physical world in unimaginable ways, potentially bestowing upon us new paradigms and new approaches in tackling heretofore insurmountable global challenges such as disease, climate, and beyond. This year, the quantum computing company D-Wave confirmed that its quantum computer "performed the most complex simulation in minutes and with a level of accuracy that would take nearly one million years using the supercomputer."² That sort of power, combined with the pattern recognition of artificial intelligence, can create opportunities for our species and planet akin to the stuff of science fiction!

Artificial intelligence has been in the regulatory spotlight for the past seven to ten years, and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this much-hyped technology. AI makes decisions in a "black box," creating a need for "explainability" in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. Furthermore, AI has the ability to learn on its own, eventually morphing into systems we may not be able to control or understand in the not-so-distant future. That is why much of the legislation evolving around the globe seeks to address these harms by demanding "explainability," transparency, substantiation, monitoring, and the categorization of potential harms from most to least severe.

AI trained with flawed data sets will inevitably produce flawed results, as seen in biased outcomes tied to race, gender, or socioeconomic status. These results are often relied upon for decision making by companies, governments, and law enforcement in ways that directly impact citizens. Acknowledging these complexities, regulators can calibrate oversight in proportion to the severity of potential harm, as seen in the risk-based approach the European Commission took in crafting its comprehensive AI Act in 2024. The EU AI Act is the most comprehensive and far-reaching piece of legislation to date, and it is extraterritorial in nature: those wishing to avail themselves of European markets must comply with it even if they are headquartered outside of Europe.

National legislation around the world is still very much in flux, with the United States implementing a patchwork of state-level laws but no comprehensive federal regulation, the UK poised to pass AI legislation imminently, and Asia progressing rapidly through a variety of national strategies and sector-specific rules. This year, there has been a shift from addressing potential harms from implementing AI systems to focusing on an "AI Arms Race." National interests aspiring to monopolize the many opportunities afforded by this emerging technology are overshadowing a more risk-based approach.³ Still, narrowly tailored regulation is an important catalyst for innovation, and so it is important for nations to get this balance right in order to reap AI's rewards.

Quantum is a different story. With its unstable "qubits" and talk of "error correction" and "entanglement," quantum computing is not easily understood by many, much less regulators. Yet a cursory understanding of the quantum physics that powers these next-generation supercomputers is sufficient to then focus on societal benefits and potential pitfalls. Regulators need to understand how its potential harms differ from those posed by artificial intelligence, and they need to consult with those on the frontlines of innovation. It is a completely different technology with different attributes that will affect how responsible legislation should be crafted.

Quantum is an umbrella term that encompasses a range of technologies, including quantum computing, quantum sensing, and quantum communication. While much of the scientific focus is on stabilizing qubits, the quantum equivalent of today's bits and bytes, regulators need to understand that a quantum computer does not evolve beyond its programming or control; unlike AI, it will not learn or improve unpredictably over time.

Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. Whereas the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and material science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using technology in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.

Furthermore, unlike the increasingly widespread availability of AI tools, quantum remains largely a centralized technology. As such, its potential harms are easier to contain, and those who control the technology are easier to identify and vet. From a lone coder in a Moscow bedroom using open-source AI for a startup, to a multinational corporation deploying AI plug-ins to streamline decision-making, artificial intelligence is rapidly becoming ubiquitous and increasingly cost-efficient. By contrast, the barriers to entry for working with quantum technologies remain extremely high. Beyond environmental constraints, such as the frigid temperatures required to stabilize highly sensitive qubits, quantum computing demands deep financial investment, advanced expertise, rigorous training, and cutting-edge lab infrastructure. It is not easy to become a quantum physicist, nor is it easy to assemble the kind of multidisciplinary team required to generate meaningful results. Certainly, it is not easy to build a stable, commercial, scalable quantum computer, as evidenced by the fact that there are none in 2025. Hybrid quantum-classical systems and experimental one-offs exist, but not the commercial, scalable solution society seeks.

Another key difference between the two technologies is that AI’s cybersecurity concerns differ fundamentally from those posed by quantum computing. Quantum computers’ ability to efficiently solve certain complex mathematical problems threatens current cryptographic systems protecting sensitive data, ranging from banking to law enforcement to national security. Thus, efforts are underway to develop and deploy “post-quantum” cryptography to prepare for breaches and compatible regulatory frameworks are being developed alongside it. By contrast, AI cybersecurity concerns center on the manipulation of content to mislead or skew results, leading to undesirable outcomes. Security and privacy remain critical policy considerations for emerging technologies that depend heavily on personally identifiable data, but risk assessments must be tailored to the underlying technology and specific use cases. Once again, this underscores how one size does not fit all.

At this point in time, quantum needs more government support than AI with respect to investment and workforce training. A digital divide is always a concern when dealing with a new technology that could place one part of the world on substantially better footing than the rest. Development and distribution of benefits from tech tend to be concentrated in wealthier nations with resources to allocate for research and investment. Regulation that promotes the equitable development and deployment of groundbreaking tech, rather than focusing solely on harms, is especially welcome in quantum, where it is and will be most needed.

Finally, while AI regulation, legislation, and frameworks continue to proliferate worldwide, there remains an opportunity to create a more cohesive regulatory regime for quantum through international cooperation, public-private sector partnerships, and the development of harmonized best practices. Many nations have published quantum roadmaps or policies, and several international organizations are actively leading the conversation, such as the Organisation for Economic Co-operation and Development (OECD), the National Institute of Standards and Technology (NIST) in the US, and the United Nations International Telecommunication Union (ITU).

With NVIDIA’s CEO, Jensen Huang, proclaiming last week that quantum is at “an inflection point”⁴ and the UN declaring 2025 the “International Year of Quantum Science and Technology,”⁵ there is growing awareness of quantum’s potential and greater incentive to bring everyone to the table now rather than later. Ventures in this ecosystem would do well to invest in specialists who can educate, inform, and advocate before policymakers as this nascent regulatory landscape develops. Without a seat at the table, quantum ventures risk ill-informed rules and regulation to which they will be bound for years to come, impacting their success. Quantum and AI, separately and together, are paradigm-shifting technologies, but they raise fundamentally different policy challenges that demand tailored approaches. When regulation is carefully crafted to address societal harms, spur investment, train workforces, and ensure equity, innovation is free to flourish — and we all win.

Charlene Flick is a technology lawyer and global public policy professional who served as the United States’ first Special Advisor for Intellectual Property, appointed by the President to the Department of State. She pivoted to global AI regulation advisory work in 2018 and is now excited to help ready the world for quantum.

Connect with her on LinkedIn: http://linkedin.com/in/charlene-flick-05792

_________________________________________________________________________________________________

Notes

¹ This article was written by a human, Charlene B. Flick, Esq. All em-dashes are my own.

² "Beyond Classical: D-Wave First to Demonstrate Quantum Supremacy on Useful, Real-World Problem," D-Wave Press Release, https://www.dwavequantum.com/company/newsroom/press-release/beyond-classical-d-wave-first-to-demonstrate-quantum-supremacy-on-useful-real-world-problem/ (March 12, 2025); "Beyond-Classical Computation in Quantum Simulation," Science (March 2025).

³ "JD Vance rails against 'excessive' AI regulation in a rebuke to Europe at the Paris AI summit," Associated Press, https://apnews.com/article/paris-ai-summit-vance-1d7826affdcdb76c580c0558af8d68d2 (February 11, 2025) (noting that "U.S. Vice President JD Vance on Tuesday warned global leaders and tech industry executives that 'excessive regulation' could cripple the rapidly growing artificial intelligence industry in a rebuke to European efforts to curb AI's risks.")

⁴ NVIDIA CEO Jensen Huang Live GTC Paris Keynote at VivaTech 2025, https://www.youtube.com/watch?v=X9cHONwKkn4 (June 11, 2025).

⁵ United Nations General Assembly, Resolution 78/287: International Year of Quantum Science and Technology (June 7, 2024).


The Quantum Insider is the leading online resource dedicated exclusively to Quantum Computing. You can contact us at [email protected].
