MindLLM: Lightweight large language model pre-training, evaluation and domain application

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While progress towards general artificial intelligence is driven by developing increasingly large-scale models, there is another branch: developing lightweight custom models that better serve specific domains, given the high cost of training and deploying LLMs and the scarcity of resources.


Bibliographic Details
Main Authors: Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Yang Gao, Heyan Huang
Format: Article
Language: English
Published: KeAi Communications Co. Ltd., 2024-01-01
Series: AI Open
Subjects: Large language model; Light weight; Bilingual
Online Access: http://www.sciencedirect.com/science/article/pii/S2666651024000111
author Yizhe Yang
Huashan Sun
Jiawei Li
Runheng Liu
Yinghao Li
Yuhang Liu
Yang Gao
Heyan Huang
author_facet Yizhe Yang
Huashan Sun
Jiawei Li
Runheng Liu
Yinghao Li
Yuhang Liu
Yang Gao
Heyan Huang
author_sort Yizhe Yang
collection DOAJ
description Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While progress towards general artificial intelligence is driven by developing increasingly large-scale models, there is another branch: developing lightweight custom models that better serve specific domains, given the high cost of training and deploying LLMs and the scarcity of resources. In this paper, we present MindLLM, a novel series of bilingual lightweight large language models trained from scratch, which alleviates these burdens by offering models with 1.3 billion and 3 billion parameters. We give a thorough account of the experience accrued during large-model development, covering every step of the process, including data construction, model architecture, evaluation, and applications. We hope these insights are valuable for fellow academics and developers. MindLLM consistently matches or surpasses the performance of larger open-source models on some public benchmarks. We also introduce an innovative instruction-tuning framework tailored to smaller models that enhances their capabilities efficiently. Moreover, we explore the application of MindLLM in specific vertical domains such as law and finance, underscoring the agility and adaptability of our lightweight models.
format Article
id doaj-art-7a6fdeef47df481f9782d2dd6048c384
institution OA Journals
issn 2666-6510
language English
publishDate 2024-01-01
publisher KeAi Communications Co. Ltd.
record_format Article
series AI Open
spelling doaj-art-7a6fdeef47df481f9782d2dd6048c384
indexed: 2025-08-20T02:19:34Z
language: eng
publisher: KeAi Communications Co. Ltd.
series: AI Open, ISSN 2666-6510
published: 2024-01-01, Volume 5, Pages 155–180
doi: 10.1016/j.aiopen.2024.08.001
title: MindLLM: Lightweight large language model pre-training, evaluation and domain application
authors: Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Yang Gao, Heyan Huang (corresponding author)
affiliations (all authors): School of Computer Science, Beijing Institute of Technology, Beijing, China; Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications, Beijing, China; Beijing Institute of Technology Southeast Academy of Information Technology, Putian, Fujian, China
url: http://www.sciencedirect.com/science/article/pii/S2666651024000111
topics: Large language model; Light weight; Bilingual
spellingShingle Yizhe Yang
Huashan Sun
Jiawei Li
Runheng Liu
Yinghao Li
Yuhang Liu
Yang Gao
Heyan Huang
MindLLM: Lightweight large language model pre-training, evaluation and domain application
AI Open
Large language model
Light weight
Bilingual
title MindLLM: Lightweight large language model pre-training, evaluation and domain application
title_full MindLLM: Lightweight large language model pre-training, evaluation and domain application
title_fullStr MindLLM: Lightweight large language model pre-training, evaluation and domain application
title_full_unstemmed MindLLM: Lightweight large language model pre-training, evaluation and domain application
title_short MindLLM: Lightweight large language model pre-training, evaluation and domain application
title_sort mindllm lightweight large language model pre training evaluation and domain application
topic Large language model
Light weight
Bilingual
url http://www.sciencedirect.com/science/article/pii/S2666651024000111
work_keys_str_mv AT yizheyang mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT huashansun mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT jiaweili mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT runhengliu mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT yinghaoli mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT yuhangliu mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT yanggao mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication
AT heyanhuang mindllmlightweightlargelanguagemodelpretrainingevaluationanddomainapplication