An Analysis of Vectorised Automatic Differentiation for Statistical Applications
Automatic differentiation (AD) is a general method for computing exact derivatives in complex sensitivity analyses and optimisation tasks, particularly when closed-form solutions are unavailable and traditional analytical or numerical methods fall short. This paper introduces a vectorised formulation of AD grounded in matrix calculus. It aligns naturally with the matrix-oriented style prevalent in statistics, supports convenient implementations, and takes advantage of sparse matrix representation and other high-level optimisation techniques that are not available in the scalar counterpart. Our formulation is well-suited to high-dimensional statistical applications, where finite differences (FD) scale poorly due to the need to repeat computations for each input dimension, resulting in significant overhead, and is advantageous in simulation-intensive settings—such as Markov Chain Monte Carlo (MCMC)-based inference—where FD requires repeated sampling and multiple function evaluations, while AD can compute exact derivatives in a single pass, substantially reducing computational cost. Numerical studies are presented to demonstrate the efficacy and speed of the proposed AD method compared with FD schemes.
| Main Authors: | Chun Fung Kwok, Dan Zhu, Liana Jacobi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-05-01 |
| Series: | Stats |
| Subjects: | automatic differentiation; derivative computation; matrix calculus; MCMC; MLE; optimisation |
| Online Access: | https://www.mdpi.com/2571-905X/8/2/40 |
| _version_ | 1849431674922729472 |
|---|---|
| author | Chun Fung Kwok; Dan Zhu; Liana Jacobi |
| author_facet | Chun Fung Kwok; Dan Zhu; Liana Jacobi |
| author_sort | Chun Fung Kwok |
| collection | DOAJ |
| description | Automatic differentiation (AD) is a general method for computing exact derivatives in complex sensitivity analyses and optimisation tasks, particularly when closed-form solutions are unavailable and traditional analytical or numerical methods fall short. This paper introduces a vectorised formulation of AD grounded in matrix calculus. It aligns naturally with the matrix-oriented style prevalent in statistics, supports convenient implementations, and takes advantage of sparse matrix representation and other high-level optimisation techniques that are not available in the scalar counterpart. Our formulation is well-suited to high-dimensional statistical applications, where finite differences (FD) scale poorly due to the need to repeat computations for each input dimension, resulting in significant overhead, and is advantageous in simulation-intensive settings—such as Markov Chain Monte Carlo (MCMC)-based inference—where FD requires repeated sampling and multiple function evaluations, while AD can compute exact derivatives in a single pass, substantially reducing computational cost. Numerical studies are presented to demonstrate the efficacy and speed of the proposed AD method compared with FD schemes. |
| format | Article |
| id | doaj-art-249342c991ec486ab1a147d93130553e |
| institution | Kabale University |
| issn | 2571-905X |
| language | English |
| publishDate | 2025-05-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Stats |
| spelling | doaj-art-249342c991ec486ab1a147d93130553e; 2025-08-20T03:27:33Z; eng; MDPI AG; Stats; 2571-905X; 2025-05-01; 8(2), article 40; 10.3390/stats8020040; An Analysis of Vectorised Automatic Differentiation for Statistical Applications; Chun Fung Kwok (St. Vincent’s Institute of Medical Research, Melbourne 3065, Australia); Dan Zhu (Department of Econometrics and Business Statistics, Monash University, Melbourne 3800, Australia); Liana Jacobi (Department of Economics, University of Melbourne, Melbourne 3010, Australia); (abstract as in the description field above); https://www.mdpi.com/2571-905X/8/2/40; automatic differentiation; derivative computation; matrix calculus; MCMC; MLE; optimisation |
| spellingShingle | Chun Fung Kwok; Dan Zhu; Liana Jacobi; An Analysis of Vectorised Automatic Differentiation for Statistical Applications; Stats; automatic differentiation; derivative computation; matrix calculus; MCMC; MLE; optimisation |
| title | An Analysis of Vectorised Automatic Differentiation for Statistical Applications |
| title_full | An Analysis of Vectorised Automatic Differentiation for Statistical Applications |
| title_fullStr | An Analysis of Vectorised Automatic Differentiation for Statistical Applications |
| title_full_unstemmed | An Analysis of Vectorised Automatic Differentiation for Statistical Applications |
| title_short | An Analysis of Vectorised Automatic Differentiation for Statistical Applications |
| title_sort | analysis of vectorised automatic differentiation for statistical applications |
| topic | automatic differentiation; derivative computation; matrix calculus; MCMC; MLE; optimisation |
| url | https://www.mdpi.com/2571-905X/8/2/40 |
| work_keys_str_mv | AT chunfungkwok ananalysisofvectorisedautomaticdifferentiationforstatisticalapplications AT danzhu ananalysisofvectorisedautomaticdifferentiationforstatisticalapplications AT lianajacobi ananalysisofvectorisedautomaticdifferentiationforstatisticalapplications AT chunfungkwok analysisofvectorisedautomaticdifferentiationforstatisticalapplications AT danzhu analysisofvectorisedautomaticdifferentiationforstatisticalapplications AT lianajacobi analysisofvectorisedautomaticdifferentiationforstatisticalapplications |
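The abstract's contrast between AD and finite differences can be illustrated with a minimal sketch. This is not the paper's vectorised matrix-calculus formulation; it is a generic forward-mode AD implementation via dual numbers, with a hypothetical test function `f`, showing that AD yields exact derivatives while FD needs an extra (approximate) function evaluation per input dimension.

```python
# Minimal sketch (illustrative only): forward-mode AD via dual numbers
# versus a one-sided finite-difference scheme, for f(x1, x2) = x1^2 + 3*x1*x2.

class Dual:
    """A number carrying a value and a derivative component."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):  # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def f(x1, x2):
    return x1 * x1 + 3 * x1 * x2

# AD: exact partials, obtained by seeding one input direction at a time.
d1 = f(Dual(2.0, 1.0), Dual(5.0, 0.0)).dot  # df/dx1 = 2*x1 + 3*x2 = 19
d2 = f(Dual(2.0, 0.0), Dual(5.0, 1.0)).dot  # df/dx2 = 3*x1 = 6

# FD: one extra function evaluation per input dimension, and only approximate.
h = 1e-6
fd1 = (f(2.0 + h, 5.0) - f(2.0, 5.0)) / h
fd2 = (f(2.0, 5.0 + h) - f(2.0, 5.0)) / h
```

Note the FD cost pattern the abstract describes: for an n-dimensional input, FD repeats the full function evaluation n extra times, while the paper's vectorised AD propagates derivatives alongside the original computation.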