Evaluating DL Model Scaling Trade-Offs During Inference via an Empirical Benchmark Analysis

Bibliographic Details
Main Authors: Demetris Trihinas, Panagiotis Michael, Moysis Symeonides
Format: Article
Language: English
Published: MDPI AG 2024-12-01
Series: Future Internet
Online Access: https://www.mdpi.com/1999-5903/16/12/468
Description
Summary: With generative Artificial Intelligence (AI) capturing public attention, the technology sector's appetite for larger and more complex Deep Learning (DL) models continues to grow. Traditionally, the focus in DL model development has been on scaling the neural network's foundational structure to increase computational complexity and enhance the representational expressiveness of the model. However, with recent advancements in edge computing and 5G networks, DL models are now being aggressively deployed across the cloud–edge–IoT continuum to realize in situ intelligent IoT services. This paradigm shift creates a pressing need for AI practitioners to account for inference costs, including latency, computational overhead, and energy efficiency, which have long been overlooked. This work presents a benchmarking framework designed to assess DL model scaling across three key performance axes during model inference: classification accuracy, computational overhead, and latency. The framework's utility is demonstrated through an empirical study involving various model structures and variants, as well as publicly available datasets, for three popular DL use cases covering natural language understanding, object detection, and regression analysis.
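
The summary describes benchmarking DL model scaling along three axes at inference time: classification accuracy, computational overhead, and latency. As an illustration only, and not the authors' framework, a minimal PyTorch sketch of such a three-axis measurement loop might look as follows; the model, the random stand-in data, and the use of parameter count as an overhead proxy are all placeholder assumptions.

# Minimal sketch of a three-axis inference benchmark: accuracy,
# computational overhead (parameter count as a proxy), and latency.
# Hypothetical illustration only -- not the paper's actual framework.
import time

import torch
import torch.nn as nn


def benchmark(model: nn.Module, inputs: torch.Tensor, labels: torch.Tensor) -> dict:
    model.eval()
    # Overhead proxy: total parameter count of the model variant.
    n_params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        # Warm-up pass so one-time setup cost is not counted as latency.
        model(inputs)
        start = time.perf_counter()
        logits = model(inputs)
        latency_ms = (time.perf_counter() - start) * 1000.0
    accuracy = (logits.argmax(dim=1) == labels).float().mean().item()
    return {"accuracy": accuracy, "params": n_params, "latency_ms": latency_ms}


if __name__ == "__main__":
    # Two "scales" of the same architecture, evaluated on random stand-in data.
    x, y = torch.randn(256, 64), torch.randint(0, 10, (256,))
    for hidden in (32, 256):
        model = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(), nn.Linear(hidden, 10))
        print(f"hidden={hidden}:", benchmark(model, x, y))

Running the sketch on the two model scales makes the trade-off concrete: the wider variant carries roughly an order of magnitude more parameters and higher latency, while accuracy on real tasks would be compared across the same axes.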
ISSN: 1999-5903