Trade-off between gradient measurement efficiency and expressivity in deep quantum neural networks

Bibliographic Details
Main Authors: Koki Chinzei, Shinichiro Yamano, Quoc Hoan Tran, Yasuhiro Endo, Hirotaka Oshima
Format: Article
Language: English
Published: Nature Portfolio 2025-05-01
Series: npj Quantum Information
Online Access: https://doi.org/10.1038/s41534-025-01036-7
Description
Summary: Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is gradient-based optimization, where gradients are estimated by quantum measurements. However, QNNs currently lack general quantum algorithms for efficiently measuring gradients, which limits their scalability. To elucidate the fundamental limits and potentials of efficient gradient estimation, we rigorously prove a trade-off between gradient measurement efficiency (the mean number of simultaneously measurable gradient components) and expressivity in deep QNNs. This trade-off indicates that more expressive QNNs require higher measurement costs per parameter for gradient estimation, while reducing QNN expressivity to suit a given task can increase gradient measurement efficiency. We further propose a general QNN ansatz called the stabilizer-logical product ansatz (SLPA), which achieves the trade-off upper bound by exploiting the symmetric structure of the quantum circuit. Numerical experiments show that the SLPA drastically reduces the sample complexity needed for training while maintaining accuracy and trainability compared to well-designed circuits based on the parameter-shift method.
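The abstract's baseline cost model is the parameter-shift method, in which each gradient component requires two separate expectation-value estimates. The following is a minimal, self-contained NumPy sketch of that rule for a single-qubit RY rotation measured in Z; it is a generic illustration of parameter-shift gradient estimation, not the paper's SLPA construction, and all function names here are illustrative.

```python
import numpy as np

# Parameter-shift rule (generic illustration, not the SLPA of the paper):
# for a gate exp(-i*theta*P/2) with a Pauli generator P, the exact gradient
# of an expectation value f(theta) is (f(theta + pi/2) - f(theta - pi/2)) / 2.

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    """Single-qubit RY rotation, exp(-i*theta*Y/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def expectation(theta):
    """<0| RY(theta)^dag Z RY(theta) |0>, which equals cos(theta)."""
    psi = ry(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

def parameter_shift_grad(theta):
    """One gradient component costs two expectation estimates."""
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta))  # agrees with the analytic value -sin(theta)
```

Because every parameter needs its own pair of measurements, the per-parameter measurement cost scales linearly with the number of parameters; the trade-off proved in the paper bounds how far this cost can be reduced by measuring multiple gradient components simultaneously in less expressive circuits.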
ISSN:2056-6387