Aligning LLMs to Improve Specificity of Preventive Action Recommendations for Industrial Safety

Bibliographic Details
Main Authors: Siddharth Tumre, Sumit Koundanya, Shubham Kumbhar, Sangameshwar Patil
Format: Article
Language: English
Published: LibraryPress@UF 2025-05-01
Series:Proceedings of the International Florida Artificial Intelligence Research Society Conference
Subjects:
Online Access:https://journals.flvc.org/FLAIRS/article/view/138959
Description
Summary: Improving industrial safety using NLP technologies supports the triple bottom line of environmental, social, and economic sustainability. The rapid evolution of Large Language Models (LLMs) has the potential to transform industrial safety and improve disaster mitigation. In this paper, we evaluate and benchmark the feasibility of using the open-source LLMs Falcon and Phi3 for the task of generating preventive recommendations to improve industrial safety. Based on domain-expert evaluation, we find that the standard, pre-trained LLMs have limitations in both the quality and quantity of the recommendations they generate: the recommendations vary in quality and may be specific, generic, or irrelevant. We find that the pre-trained version of Phi3 performs better than the base version of Falcon on the proposed task. We show that the quantity, output format, and domain-awareness of Falcon can be significantly improved using supervised fine-tuning (SFT) with a small amount of labeled data that illustrates the expected output. Despite the quality improvement after SFT and the high societal and economic impact of the application, many areas for improvement remain, which we point to as future work. To the best of our knowledge, this is the first attempt to align LLMs to improve preventive action recommendations for industrial safety.
ISSN: 2334-0754
2334-0762
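
The abstract describes supervised fine-tuning (SFT) of Falcon on a small labeled set that illustrates the expected output format. The following is a minimal illustrative sketch of that kind of SFT loop, not the authors' code: the model checkpoint, prompt template, and the toy incident/recommendation pair are assumptions made for illustration only.

```python
# Minimal sketch of supervised fine-tuning a causal LLM on
# incident -> preventive-recommendation pairs (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"  # assumed checkpoint; the paper fine-tunes Falcon
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Tiny labeled set illustrating the expected output (hypothetical example).
examples = [
    {
        "incident": "Worker slipped on hydraulic oil leaked near press line 3.",
        "recommendation": "Install drip trays under hydraulic lines and add "
                          "oil-leak checks to the daily inspection checklist.",
    },
]

def encode(ex):
    # Instruction-style prompt; the target recommendation is the completion.
    text = (f"Incident report: {ex['incident']}\n"
            f"Preventive recommendation: {ex['recommendation']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):
    for ex in examples:
        batch = encode(ex)
        # Standard causal-LM SFT: the model shifts labels internally to
        # compute next-token cross-entropy over the full prompt + target.
        out = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["input_ids"])
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice, a parameter-efficient method such as LoRA would typically be used for a model of this size; the full-parameter loop above is kept only to show the structure of SFT on a small domain-specific dataset.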