In recent years, physicists and computer scientists have developed advanced computational tools, such as neural quantum states (NQSs), to tackle challenges in quantum physics. NQSs encode the wavefunction of a quantum system in an artificial neural network, offering a powerful approach for predicting ground states. However, their effectiveness has been hampered by the lack of optimization algorithms suitable for training these models on complex quantum many-body problems.
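To make the idea concrete, the following is a minimal, purely illustrative sketch of how a neural network can represent a wavefunction: a small feedforward network maps a configuration of spins to a (log-)amplitude. The architecture, sizes, and variable names are assumptions chosen for clarity and are not the setup used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16       # number of spins (illustrative)
HIDDEN = 32  # hidden units (illustrative)

# Randomly initialized network weights: these are the variational
# parameters that an NQS optimizes to approximate the ground state.
W1 = 0.01 * rng.standard_normal((HIDDEN, N))
b1 = np.zeros(HIDDEN)
w2 = 0.01 * rng.standard_normal(HIDDEN)

def log_psi(sigma):
    """Return log of the (unnormalized) amplitude psi(sigma) for one spin configuration."""
    h = np.tanh(W1 @ sigma + b1)
    return w2 @ h

sigma = rng.choice([-1.0, 1.0], size=N)  # one basis configuration of N spins
amplitude = np.exp(log_psi(sigma))       # unnormalized wavefunction amplitude
```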
Researchers at the University of Augsburg have now addressed this limitation by introducing a new stochastic-reconfiguration optimization algorithm capable of training deep NQSs with up to one million parameters. The method, published in Nature Physics, enabled them to accurately compute the ground state of a quantum spin liquid (QSL) using an NQS.
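Conceptually, the efficiency gain comes from solving the stochastic-reconfiguration equations in the space of Monte Carlo samples rather than in the much larger space of network parameters. The sketch below shows one way such a sample-space update can be written; the function name, regularization shift, and learning rate are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sr_update_sample_space(O, E_loc, lr=0.01, shift=1e-4):
    """
    Sketch of one stochastic-reconfiguration step solved in sample space.

    O      : (Ns, P) matrix of log-derivatives d log(psi)/d theta_i,
             one row per sampled configuration (Ns samples, P parameters).
    E_loc  : (Ns,) vector of local energies for the same samples.
    Returns a parameter update d_theta of length P.
    """
    Ns = O.shape[0]
    # Center and scale so that O_bar @ O_bar^dagger is an Ns x Ns kernel.
    O_bar = (O - O.mean(axis=0)) / np.sqrt(Ns)
    eps = -lr * (E_loc - E_loc.mean()) / np.sqrt(Ns)

    # Invert an Ns x Ns matrix instead of the P x P quantum geometric tensor;
    # when Ns << P, this is what keeps very large networks tractable.
    T = O_bar @ O_bar.conj().T + shift * np.eye(Ns)
    return O_bar.conj().T @ np.linalg.solve(T, eps)
```

Because the matrix being inverted has the size of the sample set rather than the parameter set, the cost of each optimization step no longer explodes as the network grows, which is what makes training networks with on the order of a million parameters feasible.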
“Our paper focuses on the NQS method initially proposed in 2017,” Ao Chen, co-author of the paper, told Phys.org in a recent interview. “The community of computational quantum physics was initially excited about the idea of representing quantum states with neural networks and hoped NQS could produce novel insights into quantum many-body problems. However, people gradually realized the difficulty of making NQS better than existing methods.”
The advantage of NQSs lies in their extensive connections between artificial neurons, and this expressive power grows as the network is made larger. The Augsburg team's simplified training formula made it possible to train an NQS roughly 100 times larger than previous models, leading to remarkable results. Their work not only extends the ability of NQS techniques to predict the properties of interacting quantum many-body systems, but also provides a crucial optimization algorithm that improves their training.