Regularizing Data for Improving Execution Time of NLP Model

Natural language processing (NLP) is an important area of machine learning with applications in many real-world settings. Several NLP models trained on huge datasets have been proposed, and the primary purpose of these large-scale models is to serve downstream tasks. However, because of the diversity and rapidly growing size of these datasets, training consumes considerable resources and time. In this study, we propose a method that reduces the training time of NLP models on downstream tasks while maintaining accuracy. Our method removes unimportant data from the input dataset and optimizes token padding to reduce the processing time of the NLP model. Experiments conducted on several GLUE benchmark datasets demonstrate that our method reduces training time by up to 57% compared to other methods.
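The abstract only sketches the approach; the specific criteria for removing unimportant data and the exact padding optimization are given in the full paper. As a rough illustration of the padding idea only (not the authors' implementation), the sketch below sorts tokenized sequences by length and pads each batch to that batch's longest sequence rather than to a global maximum, one common way to avoid the wasted computation that fixed-length padding causes. The `PAD_ID`, batch size, and toy token ids are assumptions made for the example.

```python
# Hedged sketch: length-sorted batching with per-batch ("dynamic") padding.
# Illustrates the general padding-reduction idea, not the paper's method.

from typing import List

PAD_ID = 0       # assumed padding token id
BATCH_SIZE = 4   # assumed batch size

def make_batches(token_ids: List[List[int]], batch_size: int = BATCH_SIZE):
    """Group sequences of similar length and pad each batch only to its own max length."""
    # Sorting by length keeps similarly sized sequences together,
    # so each batch needs little padding.
    order = sorted(range(len(token_ids)), key=lambda i: len(token_ids[i]))
    batches = []
    for start in range(0, len(order), batch_size):
        chunk = [token_ids[i] for i in order[start:start + batch_size]]
        max_len = max(len(seq) for seq in chunk)  # per-batch max, not a global max
        padded = [seq + [PAD_ID] * (max_len - len(seq)) for seq in chunk]
        batches.append(padded)
    return batches

if __name__ == "__main__":
    toy = [[5, 6], [1, 2, 3, 4, 5, 6, 7], [9], [3, 3, 3, 3], [8, 8, 8]]
    for batch in make_batches(toy, batch_size=2):
        print(batch)  # each batch is padded only to its own longest sequence
```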


Bibliographic Details
Main Authors: Thang Dang, Yasufumi Sakai, Tsuguchika Tabaru, Akihiko Kasagi
Format: Article
Language: English
Published: LibraryPress@UF 2022-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Online Access: https://journals.flvc.org/FLAIRS/article/view/130672
_version_ 1849763296089997312
author Thang Dang
Yasufumi Sakai
Tsuguchika Tabaru
Akihiko Kasagi
author_facet Thang Dang
Yasufumi Sakai
Tsuguchika Tabaru
Akihiko Kasagi
author_sort Thang Dang
collection DOAJ
description Natural language processing (NLP) is an important area of machine learning with applications in many real-world settings. Several NLP models trained on huge datasets have been proposed, and the primary purpose of these large-scale models is to serve downstream tasks. However, because of the diversity and rapidly growing size of these datasets, training consumes considerable resources and time. In this study, we propose a method that reduces the training time of NLP models on downstream tasks while maintaining accuracy. Our method removes unimportant data from the input dataset and optimizes token padding to reduce the processing time of the NLP model. Experiments conducted on several GLUE benchmark datasets demonstrate that our method reduces training time by up to 57% compared to other methods.
format Article
id doaj-art-ff4a2657132d4e04839c8ba721b41f8a
institution DOAJ
issn 2334-0754
2334-0762
language English
publishDate 2022-05-01
publisher LibraryPress@UF
record_format Article
series Proceedings of the International Florida Artificial Intelligence Research Society Conference
spelling doaj-art-ff4a2657132d4e04839c8ba721b41f8a
2025-08-20T03:05:26Z
eng
LibraryPress@UF
Proceedings of the International Florida Artificial Intelligence Research Society Conference
2334-0754
2334-0762
2022-05-01
35
10.32473/flairs.v35i.130672
66871
Regularizing Data for Improving Execution Time of NLP Model
Thang Dang 0
Yasufumi Sakai
Tsuguchika Tabaru
Akihiko Kasagi
Fujitsu Limited
Natural language processing (NLP) is an important area of machine learning with applications in many real-world settings. Several NLP models trained on huge datasets have been proposed, and the primary purpose of these large-scale models is to serve downstream tasks. However, because of the diversity and rapidly growing size of these datasets, training consumes considerable resources and time. In this study, we propose a method that reduces the training time of NLP models on downstream tasks while maintaining accuracy. Our method removes unimportant data from the input dataset and optimizes token padding to reduce the processing time of the NLP model. Experiments conducted on several GLUE benchmark datasets demonstrate that our method reduces training time by up to 57% compared to other methods.
https://journals.flvc.org/FLAIRS/article/view/130672
spellingShingle Thang Dang
Yasufumi Sakai
Tsuguchika Tabaru
Akihiko Kasagi
Regularizing Data for Improving Execution Time of NLP Model
Proceedings of the International Florida Artificial Intelligence Research Society Conference
title Regularizing Data for Improving Execution Time of NLP Model
title_full Regularizing Data for Improving Execution Time of NLP Model
title_fullStr Regularizing Data for Improving Execution Time of NLP Model
title_full_unstemmed Regularizing Data for Improving Execution Time of NLP Model
title_short Regularizing Data for Improving Execution Time of NLP Model
title_sort regularizing data for improving execution time of nlp model
url https://journals.flvc.org/FLAIRS/article/view/130672
work_keys_str_mv AT thangdang regularizingdataforimprovingexecutiontimeofnlpmodel
AT yasufumisakai regularizingdataforimprovingexecutiontimeofnlpmodel
AT tsuguchikatabaru regularizingdataforimprovingexecutiontimeofnlpmodel
AT akihikokasagi regularizingdataforimprovingexecutiontimeofnlpmodel