Comparing Gradient Boosting and Neural Networks in Predicting Age Based on Coronal Pulp Height from Panoramic Radiographs – A Retrospective Radiographic Study
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wolters Kluwer Medknow Publications, 2025-01-01 |
| Series: | Journal of Indian Academy of Oral Medicine and Radiology |
| Subjects: | |
| Online Access: | https://journals.lww.com/10.4103/jiaomr.jiaomr_332_24 |
| Summary: | Background:
Age estimation is the process of establishing an individual’s age using biological indicators. It is extremely important in many domains, including forensic science, anthropology, and legal medicine.
Objectives:
To compare the predictive accuracy and efficacy of gradient boosting and neural network models in estimating chronological age from coronal pulp height measurements using panoramic radiographs.
Methods:
Digital dental panoramic radiographs were obtained from institutional databases. Age estimation was performed manually using the first molar as the reference tooth. Two machine learning models, gradient boosting and a neural network, were developed to predict age from tooth coronal index (TCI) values. Hyperparameter tuning was performed to optimize each model's performance so that it could accurately predict age from the TCI values. The models were also tested for sensitivity and specificity.
Results:
The area under the curve (AUC) values for the neural network and gradient boosting models were 0.821 and 0.959, respectively. Gradient boosting's AUC of 0.959 indicates excellent classification ability, whereas the neural network's 0.821 points to weaker performance. Gradient boosting achieved a classification accuracy of 0.765, markedly higher than the neural network's 0.529, showing that gradient boosting made fewer prediction errors.
Conclusion:
Gradient boosting excels in interpretability, efficiency with smaller datasets, and generalization. In contrast, neural networks are capable of modeling complex relationships within high-dimensional data but may require more resources and training for optimal performance. |
| ISSN: | 0972-1363, 0975-1572 |
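The abstract does not include the study's code or data, but the comparison it describes — training a gradient boosting model and a neural network on the same features, then comparing AUC and accuracy — can be sketched with scikit-learn. Everything below is a hypothetical illustration: the synthetic features stand in for the study's TCI measurements, and the model settings are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of the gradient-boosting vs. neural-network comparison
# described in the abstract. Synthetic data stands in for the study's
# tooth coronal index (TCI) features; no settings here come from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

# Synthetic stand-in for radiograph-derived features and an age-group label.
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Two models trained on identical data, mirroring the study's design.
gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                   random_state=0).fit(X_tr, y_tr)

# Compare the two metrics reported in the abstract: AUC and accuracy.
for name, model in [("gradient boosting", gb), ("neural network", nn)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: AUC={auc:.3f}, accuracy={acc:.3f}")
```

In practice the abstract's hyperparameter tuning step would wrap each estimator in a search such as `GridSearchCV` before comparison; the defaults above are used only to keep the sketch short.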