Robustness of Big Language Modeling in Finance
Main Author:
Format: Article
Language: English
Published: EDP Sciences, 2025-01-01
Series: ITM Web of Conferences
Online Access: https://www.itm-conferences.org/articles/itmconf/pdf/2025/01/itmconf_dai2024_02003.pdf
Summary: As artificial intelligence gradually enters all aspects of people’s lives, large language models are being used to solve problems in many fields. In finance, they are applied to tasks such as stock prediction and risk assessment, but their outputs can be wrong because of model hallucination and adversarial attacks. The robustness of large language models in finance is therefore the main topic of this article. We searched the literature of recent years using keywords such as “large language model”, “adversarial attack”, and “model hallucination”. The existing literature explains the causes of adversarial attacks and model hallucination and proposes methods to enhance the robustness of large language models. It shows that an attacker can trigger hallucination in a large language model through an adversarial attack and thereby reduce the model’s reliability. However, finance-specific datasets for large language models are lacking, which limits how well robustness improvements can be tailored to this domain. Future research should therefore focus on finance-specific adversarial training and robustness optimization of large language models.
ISSN: 2271-2097