Hyperparameter optimisation in deep learning from ensemble methods: applications to proton structure

Deep learning models are defined in terms of a large number of hyperparameters, such as network architectures and optimiser settings. These hyperparameters must be determined separately from the model parameters such as network weights, and are often fixed by ad-hoc methods or by manual inspection of the results. An algorithmic, objective determination of hyperparameters demands the introduction of dedicated target metrics, different from those adopted for the model training. Here we present a new approach to the automated determination of hyperparameters in deep learning models, based on statistical estimators constructed from an ensemble of models sampling the underlying probability distribution in model space. This strategy requires the simultaneous parallel training of up to several hundred models and can be effectively implemented by deploying hardware accelerators such as graphics processing units (GPUs). As a proof of concept, we apply this method to the determination of the partonic substructure of the proton within the NNPDF framework and demonstrate the robustness of the resulting model uncertainty estimates. The new GPU-optimised NNPDF code delivers a speed-up of up to two orders of magnitude, stabilises memory requirements, and reduces energy consumption by up to 90% compared to sequential CPU-based model training. While we focus on proton structure here, the method is fully general and applicable to any deep learning problem relying on hyperparameter optimisation for an ensemble of models.

Bibliographic Details
Main Authors: Juan Cruz-Martinez, Aron Jansen, Gijs van Oord, Tanjona R Rabemananjara, Carlos M R Rocha, Juan Rojo, Roy Stegeman
Format: Article
Language:English
Published: IOP Publishing 2025-01-01
Series:Machine Learning: Science and Technology
Subjects: hyperoptimisation; machine learning; hardware acceleration; proton structure; parton distribution functions; GPU
Online Access:https://doi.org/10.1088/2632-2153/adcd39
author Juan Cruz-Martinez
Aron Jansen
Gijs van Oord
Tanjona R Rabemananjara
Carlos M R Rocha
Juan Rojo
Roy Stegeman
collection DOAJ
description Deep learning models are defined in terms of a large number of hyperparameters, such as network architectures and optimiser settings. These hyperparameters must be determined separately from the model parameters such as network weights, and are often fixed by ad-hoc methods or by manual inspection of the results. An algorithmic, objective determination of hyperparameters demands the introduction of dedicated target metrics, different from those adopted for the model training. Here we present a new approach to the automated determination of hyperparameters in deep learning models, based on statistical estimators constructed from an ensemble of models sampling the underlying probability distribution in model space. This strategy requires the simultaneous parallel training of up to several hundred models and can be effectively implemented by deploying hardware accelerators such as graphics processing units (GPUs). As a proof of concept, we apply this method to the determination of the partonic substructure of the proton within the NNPDF framework and demonstrate the robustness of the resulting model uncertainty estimates. The new GPU-optimised NNPDF code delivers a speed-up of up to two orders of magnitude, stabilises memory requirements, and reduces energy consumption by up to 90% compared to sequential CPU-based model training. While we focus on proton structure here, the method is fully general and applicable to any deep learning problem relying on hyperparameter optimisation for an ensemble of models.
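
The abstract's central idea, scoring a hyperparameter choice with a statistical estimator built from an ensemble of models trained in parallel, can be illustrated with a short sketch. The following is a minimal toy example in JAX, not the NNPDF code: the tiny network, the gradient-descent loop, and the ensemble_score estimator (validation loss of the central prediction plus the replica spread) are all illustrative assumptions, not the estimators used in the paper.

import jax
import jax.numpy as jnp

def init_params(key, width):
    # One small MLP, 1 -> width -> 1, with scaled-normal initialisation.
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (1, width)),
        "b1": jnp.zeros(width),
        "w2": jax.random.normal(k2, (width, 1)) / jnp.sqrt(width),
        "b2": jnp.zeros(1),
    }

def predict(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def mse(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def train_replica(params, x, y, lr, steps=500):
    # Plain full-batch gradient descent on a single replica.
    def step(p, _):
        g = jax.grad(mse)(p, x, y)
        return jax.tree_util.tree_map(lambda w, gw: w - lr * gw, p, g), None
    trained, _ = jax.lax.scan(step, params, None, length=steps)
    return trained

def ensemble_score(width, lr, x_tr, y_tr, x_val, y_val, n_replicas=100, seed=0):
    # Train all replicas in parallel with vmap (the replica axis maps onto
    # the accelerator), then score the hyperparameter point with an
    # estimator over the whole ensemble rather than over a single model.
    keys = jax.random.split(jax.random.PRNGKey(seed), n_replicas)
    params = jax.vmap(lambda k: init_params(k, width))(keys)
    trained = jax.vmap(lambda p: train_replica(p, x_tr, y_tr, lr))(params)
    preds = jax.vmap(lambda p: predict(p, x_val))(trained)  # (replicas, n_val, 1)
    central_loss = jnp.mean((preds.mean(axis=0) - y_val) ** 2)
    spread = jnp.mean(preds.std(axis=0))
    return central_loss + spread  # lower is better for this toy estimator

A single call such as ensemble_score(width=16, lr=0.05, x_tr=x, y_tr=y, x_val=x, y_val=y, n_replicas=32) then yields one objective number per hyperparameter point. Vectorising the replica dimension with vmap is what makes training hundreds of models simultaneously practical on a GPU, which is the computational pattern the abstract describes.
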
format Article
id doaj-art-fba0e9b8d0d446df8a59c826129e0055
institution Kabale University
issn 2632-2153
language English
publishDate 2025-01-01
publisher IOP Publishing
record_format Article
series Machine Learning: Science and Technology
spelling Machine Learning: Science and Technology, vol. 6, no. 2, 025027 (2025-01-01), IOP Publishing, ISSN 2632-2153, https://doi.org/10.1088/2632-2153/adcd39
Author affiliations:
Juan Cruz-Martinez (https://orcid.org/0000-0002-8061-1965): Theoretical Physics Department, CERN, CH-1211 Geneva 23, Switzerland
Aron Jansen: Netherlands eScience Center, Science Park 140, 1098 XG Amsterdam, The Netherlands
Gijs van Oord: Netherlands eScience Center, Science Park 140, 1098 XG Amsterdam, The Netherlands
Tanjona R Rabemananjara (https://orcid.org/0000-0002-8395-8059): Department of Physics and Astronomy, Vrije Universiteit, NL-1081 HV Amsterdam, The Netherlands; Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands
Carlos M R Rocha (https://orcid.org/0000-0002-4118-8308): Netherlands eScience Center, Science Park 140, 1098 XG Amsterdam, The Netherlands
Juan Rojo (https://orcid.org/0000-0003-4279-2192): Theoretical Physics Department, CERN, CH-1211 Geneva 23, Switzerland; Department of Physics and Astronomy, Vrije Universiteit, NL-1081 HV Amsterdam, The Netherlands; Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands
Roy Stegeman (https://orcid.org/0000-0002-3852-8009): The Higgs Centre for Theoretical Physics, University of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3FD, United Kingdom
title Hyperparameter optimisation in deep learning from ensemble methods: applications to proton structure
topic hyperoptimisation
machine learning
hardware acceleration
proton structure
parton distribution functions
GPU
url https://doi.org/10.1088/2632-2153/adcd39