Multimodal adaptive temporal phase unwrapping using deep learning and physical priors
| Main Authors: | , , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | AIP Publishing LLC, 2025-04-01 |
| Series: | APL Photonics |
| Online Access: | http://dx.doi.org/10.1063/5.0252363 |
| Summary: | In optical 3D measurement, temporal phase unwrapping (TPU) is widely employed in fringe projection and interferometry, as it is crucial for resolving wrapped phase ambiguities and obtaining absolute phase distributions. Recently, deep learning has significantly enhanced TPU performance, particularly in noise robustness. However, existing deep learning-based TPU methods often struggle with generalization, as they typically assume that training and testing data share the same distribution, such as maintaining a constant spatial frequency for fringes in both training and testing processes. When fringe patterns become sparser or denser, the phase unwrapping accuracy declines sharply. Moreover, conventional learning-based methods develop deep neural networks (DNNs) that operate in a single modality, meaning that once the training is complete, the DNN can only perform a specific TPU algorithm. If other TPU methods need to be characterized, the DNN must be retrained, which is a time-consuming process. To address these challenges, we propose for the first time a deep learning-based multimodal adaptive TPU method that integrates prior information obtained by mathematical models of TPU. This approach allows a trained DNN to effectively perform multi-frequency TPU, multi-wavelength TPU, and number-theoretic TPU at the same time while adaptively processing unseen fringes from diverse systems. Experimental results demonstrate that while a U-Net-based TPU method nearly fails with varying test fringes, our method maintains a high accuracy of ∼96%. This work offers a novel perspective for developing robust, generalizable AI-driven optical metrology techniques. |
| ISSN: | 2378-0967 |
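The abstract contrasts the proposed DNN with classical model-based temporal phase unwrapping. As background, a minimal sketch of the conventional multi-frequency (hierarchical) TPU relation is given below: the unambiguous unit-frequency phase scales up by the frequency ratio to estimate the fringe order of the dense wrapped phase, which is then unwrapped as Φ = φ + 2πk. This is the textbook algorithm the paper's DNN emulates, not the paper's own implementation; all function and variable names here are illustrative.

```python
import numpy as np

def multifreq_tpu(phi_unit, phi_wrapped, freq_ratio):
    """Classic multi-frequency TPU (illustrative sketch).

    phi_unit    : absolute phase of the unit-frequency fringe (no ambiguity)
    phi_wrapped : wrapped phase of the high-frequency fringe, in (-pi, pi]
    freq_ratio  : high-frequency / unit-frequency ratio
    """
    # Fringe order: the integer number of 2*pi jumps between the scaled
    # low-frequency estimate and the wrapped high-frequency phase.
    k = np.round((freq_ratio * phi_unit - phi_wrapped) / (2 * np.pi))
    # Absolute high-frequency phase.
    return phi_wrapped + 2 * np.pi * k

# Synthetic noise-free demo on a 1D profile.
x = np.linspace(0.0, 1.0, 1000)
freq = 16.0
phi_abs = 2 * np.pi * freq * x                 # ground-truth absolute phase
phi_wrapped = np.angle(np.exp(1j * phi_abs))   # wrap into (-pi, pi]
phi_unit = 2 * np.pi * 1.0 * x                 # unit-frequency phase
phi_unwrapped = multifreq_tpu(phi_unit, phi_wrapped, freq)
```

In the noise-free case the recovered phase matches the ground truth; the abstract's point is that with noise, or with a test-time `freq` different from the training distribution, the rounding step (or a single-modality DNN replacing it) becomes error-prone, which motivates the multimodal, physics-prior-guided network.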