Efficient microstructure segmentation in three-dimensional imaging: Combining few-shot learning with the segment anything model

Bibliographic Details
Main Authors: Po-Yen Tung, Richard J. Harrison
Format: Article
Language: English
Published: Elsevier 2025-07-01
Series: Next Materials
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2949822825001819
Description
Summary: The application of three-dimensional (3D) imaging techniques, such as X-ray tomography and focussed ion beam scanning electron microscopy (FIB-SEM), is increasingly widespread in microstructural analysis of natural materials. However, our ability to collect high-resolution tomographic datasets, each comprising thousands of two-dimensional (2D) images with millions of pixels, far outstrips our ability to analyse them. Pixel-level segmentation of each 2D image is the first step in any analysis pipeline, but creates a considerable human bottleneck in the workflow that can now be overcome using machine learning. Although advanced pre-trained models such as the Segment Anything Model (SAM) have emerged, conventional segmentation workflows for 3D tomographic data remain limited in comparison. To tackle this, we propose a machine learning workflow that combines SAM with a few-shot learning framework, automating segmentation and minimising user bias. Using SAM, we generate precise annotations from a limited subset of 2D images through basic input prompts, such as points and boxes. These annotations serve as the training data for the few-shot learning model. We benchmark this workflow using a complex 3D FIB-SEM tomographic dataset of the C2 ungrouped carbonaceous chondrite WIS91600. With only 0.6 % of the training data, our method achieves an intersection over union (IoU) score of 80.62 % compared to the ground truth, significantly outperforming widely used methods that achieve a maximum IoU score of 67.07 %. The strong performance on this challenging meteorite dataset highlights the workflow's potential for broader application across materials and imaging modalities.
ISSN:2949-8228
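The intersection over union (IoU) score the abstract reports is a standard overlap metric between a predicted segmentation mask and the ground-truth mask. As an illustration only (this sketch is not the authors' code, and the toy masks are invented for the example), IoU on binary masks can be computed as:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, truth).sum() / union)

# Hypothetical 3x3 masks: 4 predicted pixels, 4 true pixels, 3 overlapping
pred = np.array([[1, 1, 0],
                 [1, 1, 0],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 0]])
print(iou(pred, truth))  # intersection 3 / union 5 = 0.6
```

A score of 80.62 % therefore means roughly four-fifths of the combined predicted-plus-true region was segmented in agreement with the ground truth.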