Researchers Use AI to Expose Hidden Webs of Entanglement


Insider Brief

  • Researchers used machine learning to detect hidden entanglement created by quantum measurements, offering a scalable alternative to traditional methods.
  • Experiments on Google’s Sycamore and Willow quantum processors showed that neural networks could identify measurement-induced entanglement in one- and two-dimensional qubit arrays.
  • The approach may inform future quantum error correction schemes by revealing how measurement outcomes affect fragile qubits.

A team of physicists and computer scientists has shown that artificial intelligence can detect hidden webs of quantum entanglement created by measurement, a result they say could spur new error correction techniques and reshape how researchers study and control quantum machines.

The research, posted to the preprint server arXiv by scientists from the University of California San Diego, UC Berkeley, Princeton, Purdue, Lawrence Berkeley National Laboratory, and Google Research, demonstrates a new method for observing “measurement-induced entanglement,” a counterintuitive effect in which simply measuring part of a quantum system creates connections among particles that were never directly linked.

According to the researchers, this is important because entanglement is the lifeblood of quantum computing, allowing qubits — the quantum counterpart of digital bits — to correlate across long distances. But when many measurements are made, the outcomes are random, and the entanglement patterns are difficult to track.


“Measurements are essential for the processing and protection of information in quantum computers,” the researchers write. “They can also induce long-range entanglement between unmeasured qubits. However, when post-measurement states depend on many non-deterministic measurement outcomes, there is a barrier to observing and using the entanglement induced by prior measurements.”

Traditionally, scientists had to repeat an experiment exponentially many times to collect runs with matching measurement outcomes, an impractical approach for large quantum systems. This new study demonstrates that machine learning models can uncover these elusive connections by training directly on the raw data of quantum experiments. According to the authors, this bypasses both the need for advance knowledge of how the quantum state was prepared and the costly process of postselecting data.

‘Cluster States’

The team prepared entangled “cluster states,” arrangements of qubits entangled along a chain or across a grid, on Google’s superconducting quantum processors, including the Sycamore device, famous for Google’s 2019 quantum supremacy experiment, and a newer 105-qubit processor named Willow. By measuring nearly all qubits and leaving two unmeasured “probes,” they tested whether those two distant qubits became entangled as a result of the intermediate measurements.

They found that neural networks trained in an unsupervised fashion — without labeled data — were able to reconstruct models of the probe qubits’ states. Cross-checking the AI predictions with experimental data revealed the presence of measurement-induced entanglement across both one-dimensional chains of 34 qubits and two-dimensional grids of 36 qubits.

The study further showed that as researchers tuned the measurement settings, the system passed through a sharp threshold: in some regimes, the neural networks could successfully learn and reveal entanglement; in others, they failed, generating flat, featureless predictions. This pattern corresponds to a “measurement-induced phase transition,” a newly recognized category of transitions in quantum systems.

Scaling Quantum, Advancing Error Correction

The results carry several implications for quantum technology. For example, the researchers suggest the experiment could lead to a scalable way to study collective effects of measurement on large quantum systems. In other words, researchers could move beyond toy systems and start learning how measurement-driven effects, like entanglement and error propagation, play out in the large devices that will matter for real-world quantum computing.

The approach also may influence the design of quantum error correction schemes, according to the team. Quantum error correction depends on making constant measurements to catch mistakes in fragile qubits, but those measurements generate random outcomes that are notoriously hard to interpret. The researchers say their machine learning approach, which teases out hidden entanglement from such data, could be adapted to design smarter error correction schemes that make quantum computers more stable.

The researchers report that their work can act as a basis for future experiments on quantum error correction and more general problems in quantum control. By showing that AI can learn from measurement outcomes alone, the study suggests that machine learning could become an indispensable companion to quantum experiments, spotting patterns too complex for classical calculations or human intuition.

How The Researchers Designed The Experiment

The researchers prepared short-range entangled cluster states using carefully designed circuits of superconducting qubits. In the one-dimensional case, they used 34 qubits arranged in a line. In the two-dimensional case, they prepared a 6×6 grid of 36 qubits.
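To make the cluster-state idea concrete, here is a minimal pure-Python sketch (our illustration, not the authors' code): a one-dimensional cluster state is prepared by placing every qubit in the |+⟩ state and then applying a controlled-Z gate between each neighboring pair.

```python
import math

def plus_state(n):
    # |+>^n: equal amplitude on every n-bit string
    amp = 1.0 / math.sqrt(2 ** n)
    return [amp] * (2 ** n)

def apply_cz(state, n, a, b):
    # CZ flips the sign of amplitudes where qubits a and b are both 1
    out = list(state)
    for idx in range(2 ** n):
        if (idx >> a) & 1 and (idx >> b) & 1:
            out[idx] = -out[idx]
    return out

def cluster_state_1d(n):
    state = plus_state(n)
    for q in range(n - 1):  # entangle each neighboring pair
        state = apply_cz(state, n, q, q + 1)
    return state

psi = cluster_state_1d(4)
# Every amplitude keeps magnitude 1/4; the entanglement lives in the signs.
```

On hardware this is done with native gates across 34 or 36 qubits; the four-qubit statevector here only shows the structure: every bitstring keeps the same magnitude, and the short-range entanglement is encoded in the sign pattern.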

After preparing the states, they measured all but two probe qubits. Because the outcomes vary randomly, each experiment produced a different post-measurement state. Rather than trying to repeat experiments until rare outcomes aligned — a standard but inefficient method — the team trained machine learning models to map measurement outcomes onto predictions of the probe qubits’ states.

The machine learning models included generative neural networks with attention mechanisms inspired by language models such as BERT, as well as tensor network models optimized variationally. Training was unsupervised, meaning the models only used the experimental outcomes without needing labels or prior knowledge of the quantum state preparation.
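As a cartoon of that training task (a toy of our own construction, not the paper's architecture), one can imagine the simplest possible model built only from raw outcome records: a table of conditional frequencies mapping the measured bits to a prediction for a probe bit, with the correlation itself hypothetical.

```python
from collections import defaultdict
import random

random.seed(0)

def experiment():
    # Hypothetical correlated process: the probe bit equals the parity
    # of the two "measured" bits, flipped 10% of the time by noise.
    m = (random.randint(0, 1), random.randint(0, 1))
    parity = m[0] ^ m[1]
    probe = parity if random.random() < 0.9 else 1 - parity
    return m, probe

# Tally probe outcomes conditioned on the measured bitstring
counts = defaultdict(lambda: [0, 0])
for _ in range(20000):
    m, probe = experiment()
    counts[m][probe] += 1

def predict(m):
    # "Unsupervised" in the sense that only raw outcome records are used
    n0, n1 = counts[m]
    return n1 / (n0 + n1)
```

The real models replace this lookup table with attention-based generative networks and variationally optimized tensor networks, which can generalize across the astronomically many bitstrings a lookup table could never cover.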

To validate the models, the researchers used a technique called “classical shadows,” which involves applying random unitary operations to probe qubits before measuring them. By correlating the AI’s predictions with these shadows, they were able to place bounds on the entanglement and entropy of the probe qubits.
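The classical-shadow step can be sketched for a single qubit in pure Python (a schematic of the general technique with a hypothetical probe state, not the experiment's actual pipeline): pick a random Pauli measurement basis, record the outcome b, and form the snapshot 3·U†|b⟩⟨b|U − I, whose average over many shots reconstructs the state.

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

s = 2 ** -0.5
BASES = [
    [[s, s], [s, -s]],            # Hadamard: measure in the X basis
    [[s, -1j * s], [s, 1j * s]],  # rotate to the Y basis
    [[1, 0], [0, 1]],             # identity: measure in the Z basis
]

rho_true = [[0.5, 0.5], [0.5, 0.5]]  # hypothetical probe state |+><+|

random.seed(1)
shots = 30000
est = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(shots):
    U = random.choice(BASES)                      # random measurement basis
    rot = matmul(matmul(U, rho_true), dagger(U))  # state in that basis
    b = 0 if random.random() < rot[0][0].real else 1
    ket = [[1, 0], [0, 0]] if b == 0 else [[0, 0], [0, 1]]  # |b><b|
    snap = matmul(matmul(dagger(U), ket), U)      # U^dagger |b><b| U
    for i in range(2):
        for j in range(2):
            est[i][j] += (3 * snap[i][j] - (1 if i == j else 0)) / shots
# The averaged snapshots in est approximate rho_true.
```

Correlating shadow data like this with a trained model's predictions is what lets the team bound the probe qubits' entanglement and entropy without postselecting on measurement outcomes.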

Limitations

The authors caution that their models are imperfect and that bounds on entanglement are sensitive to noise and decoherence in the quantum devices. In some regimes, particularly when measurement bases were strongly tuned, the neural networks failed to capture existing entanglement, even though gate-based models — simulations using knowledge of the circuit — showed it should be present.

They also report that while their method avoids exponential overhead, it still relies on collecting very large datasets. For the two-dimensional case, training required on the order of 78 million experimental repeats, though this was still vastly smaller than the astronomical number of all possible outcomes.

Future Directions

Despite limitations, the researchers argue that the method opens a pathway to studying measurement-driven quantum phenomena in more realistic settings. They envision applying the approach to ultracold atoms and molecules, where high-resolution imaging provides data but precise knowledge of quantum state preparation is limited.

As mentioned, one major near-term target is quantum error correction. Past demonstrations have relied on stabilizer codes — systems with known algebraic structures that map measurement outcomes directly to errors. But the real world is more complicated. By training on raw data, machine learning models may allow error correction to extend to less controlled, more realistic environments.

The researchers also suggest that their approach could be applied to study exotic phases of quantum matter that emerge under repeated measurement, such as topologically ordered states. These phases are believed to be useful for robust quantum computation, but until now have been difficult to probe without heavy postselection.

Overall, the study underscores a growing convergence between artificial intelligence and quantum computing. Just as machine learning has transformed fields from image recognition to drug discovery, it is now becoming a tool to understand the quantum world. By treating the messy, random outcomes of measurement as a dataset to be mined, AI provides a way to see order in apparent noise.

The bigger picture, according to the researchers, is that these results are part of a broader lesson about the interplay between learning and observation in quantum science.

The team writes: “Our experiments demonstrate how we can observe the effects of large numbers of measurements on a quantum system. A key initial step is to learn how to infer the effects of measurements on unmeasured quantum degrees of freedom. Once an approximate model for the system is generated from training data, cross-correlations between the model and an independent dataset can be used to construct bounds on properties of measured quantum states. These protocols highlight a deep connection between our ability to learn how to model a quantum system, and our ability to observe the effects of measurements.”

The study was conducted by Wanda Hou and Yi-Zhuang You of the University of California, San Diego; Samuel J. Garratt and Ehud Altman of the University of California, Berkeley and Lawrence Berkeley National Laboratory; Garratt is also affiliated with Princeton University. Norhan M. Eassa, Eliott Rosenberg and Pedram Roushan are with Google Research, with Eassa also affiliated with Purdue University.

For a deeper, more technical dive, please review the paper on arXiv. Note that arXiv is a preprint server, which allows researchers to receive quick feedback on their work; neither it nor this article is a peer-reviewed publication. Peer review is an important step in the scientific process to verify results.

Matt Swayne

With a background in journalism and communications spanning several decades, Matt Swayne has worked as a science communicator for an R1 university for more than 12 years, specializing in translating high tech and deep tech for general audiences. He has served as a writer, editor and analyst at The Quantum Insider since its inception. He also develops and teaches courses to improve the media and communications skills of scientists. [email protected]
