Reducing the Parameter Dependency of Phase-Picking Neural Networks with Dice Loss
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Seismological Society of America, 2025-01-01 |
| Series: | The Seismic Record |
| Online Access: | https://doi.org/10.1785/0320240028 |
| Summary: | Training a neural network to pick seismic phase arrivals has commonly been posed as a segmentation problem. It is a highly imbalanced segmentation problem, in the sense that the background vastly dominates the foreground, because the goal is to pick the single optimal sample point that represents the arrival of a seismic phase within a time window many seconds long. Here, we test the Dice loss, a preferred loss function for highly imbalanced image segmentation problems. We show that phase-picking neural networks trained with the Dice loss behave in a binary fashion: the prediction output is almost always either nearly 1 or nearly 0. This feature removes the strong dependence of data processing workflows on the prediction score threshold, which is otherwise a critical parameter to determine when using neural networks trained with the cross-entropy loss. When used strategically, models trained with the Dice loss can reduce the parameter dependency of machine-learning-based seismic monitoring. |
| ISSN: | 2694-4006 |
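The summary describes the Dice loss as a remedy for extreme foreground/background imbalance. As a rough illustration only (this is a minimal NumPy sketch; the function name `dice_loss` and the smoothing term `eps` are assumptions, not taken from the article), a soft Dice loss over a predicted probability trace and a binary target trace can be written as:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2*|P . T| / (|P| + |T|).

    pred   : predicted probabilities in [0, 1], one value per sample
    target : binary ground-truth labels (1 at the phase arrival, else 0)
    eps    : small smoothing constant to avoid division by zero
    """
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

# Tiny example: a 10-sample window with a single "arrival" at index 3.
target = np.zeros(10)
target[3] = 1.0

perfect = target.copy()          # exact pick -> loss near 0
wrong = np.zeros(10)
wrong[7] = 1.0                   # pick at the wrong sample -> loss near 1

print(dice_loss(perfect, target))
print(dice_loss(wrong, target))
```

Because the loss depends only on the overlap between prediction and target rather than on per-sample averages, the vast zero-valued background contributes nothing to the numerator, which is consistent with the binary, near-0/near-1 output behavior the article reports.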