Towards a benchmark dataset for large language models in the context of process automation
The field of process automation possesses a substantial corpus of textual documentation that can be leveraged with Large Language Models (LLMs) and Natural Language Understanding (NLU) systems. Recent advancements in diverse LLMs, available in open source, present an opportunity to utilize them effectively in this area. However, LLMs are pre-trained on general textual data and lack knowledge in more specialized and niche areas such as process automation. Furthermore, the lack of datasets specifically tailored to process automation makes it difficult to assess the effectiveness of LLMs in this domain accurately. This paper aims to lay the foundation for creating a multitask benchmark for evaluating and adapting LLMs in process automation. In the paper, we introduce a novel workflow for semi-automated data generation, specifically tailored to creating extractive Question Answering (QA) datasets. The proposed methodology in this paper involves extracting passages from academic papers focusing on process automation, generating corresponding questions, and subsequently annotating and evaluating the dataset. The dataset initially created also undergoes data augmentation and is evaluated using metrics for semantic similarity. This study then benchmarked six LLMs on the newly created extractive QA dataset for process automation.
| Main Authors: | Tejennour Tizaoui, Ruomu Tan |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-12-01 |
| Series: | Digital Chemical Engineering |
| Subjects: | Large language models (LLMs); Natural language understanding (NLU); Process automation; Extractive question answering (QA); Natural language processing (NLP) |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2772508124000486 |
| _version_ | 1850248187341701120 |
|---|---|
| author | Tejennour Tizaoui; Ruomu Tan |
| author_facet | Tejennour Tizaoui; Ruomu Tan |
| author_sort | Tejennour Tizaoui |
| collection | DOAJ |
| description | The field of process automation possesses a substantial corpus of textual documentation that can be leveraged with Large Language Models (LLMs) and Natural Language Understanding (NLU) systems. Recent advancements in diverse LLMs, available in open source, present an opportunity to utilize them effectively in this area. However, LLMs are pre-trained on general textual data and lack knowledge in more specialized and niche areas such as process automation. Furthermore, the lack of datasets specifically tailored to process automation makes it difficult to assess the effectiveness of LLMs in this domain accurately. This paper aims to lay the foundation for creating a multitask benchmark for evaluating and adapting LLMs in process automation. In the paper, we introduce a novel workflow for semi-automated data generation, specifically tailored to creating extractive Question Answering (QA) datasets. The proposed methodology in this paper involves extracting passages from academic papers focusing on process automation, generating corresponding questions, and subsequently annotating and evaluating the dataset. The dataset initially created also undergoes data augmentation and is evaluated using metrics for semantic similarity. This study then benchmarked six LLMs on the newly created extractive QA dataset for process automation. |
| format | Article |
| id | doaj-art-e182dad619b44c9ebe9c4949d86c8ebf |
| institution | OA Journals |
| issn | 2772-5081 |
| language | English |
| publishDate | 2024-12-01 |
| publisher | Elsevier |
| record_format | Article |
| series | Digital Chemical Engineering |
| spelling | doaj-art-e182dad619b44c9ebe9c4949d86c8ebf; 2025-08-20T01:58:46Z; eng; Elsevier; Digital Chemical Engineering; 2772-5081; 2024-12-01; vol. 13, art. 100186; 10.1016/j.dche.2024.100186; Towards a benchmark dataset for large language models in the context of process automation; Tejennour Tizaoui (Technical University of Munich, Chair for Data Processing, Munich, Germany; corresponding author); Ruomu Tan (ABB Corporate Research Center, Ladenburg, Germany); abstract (see description above); http://www.sciencedirect.com/science/article/pii/S2772508124000486; Large language models (LLMs); Natural language understanding (NLU); Process automation; Extractive question answering (QA); Natural language processing (NLP) |
| spellingShingle | Tejennour Tizaoui; Ruomu Tan; Towards a benchmark dataset for large language models in the context of process automation; Digital Chemical Engineering; Large language models (LLMs); Natural language understanding (NLU); Process automation; Extractive question answering (QA); Natural language processing (NLP) |
| title | Towards a benchmark dataset for large language models in the context of process automation |
| title_full | Towards a benchmark dataset for large language models in the context of process automation |
| title_fullStr | Towards a benchmark dataset for large language models in the context of process automation |
| title_full_unstemmed | Towards a benchmark dataset for large language models in the context of process automation |
| title_short | Towards a benchmark dataset for large language models in the context of process automation |
| title_sort | towards a benchmark dataset for large language models in the context of process automation |
| topic | Large language models (LLMs); Natural language understanding (NLU); Process automation; Extractive question answering (QA); Natural language processing (NLP) |
| url | http://www.sciencedirect.com/science/article/pii/S2772508124000486 |
| work_keys_str_mv | AT tejennourtizaoui towardsabenchmarkdatasetforlargelanguagemodelsinthecontextofprocessautomation AT ruomutan towardsabenchmarkdatasetforlargelanguagemodelsinthecontextofprocessautomation |
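The abstract describes benchmarking six LLMs on an extractive QA dataset, where a model must return a span copied from the passage. A standard way to score such answers against annotated spans is SQuAD-style exact match and token-level F1 (the paper itself reports semantic-similarity metrics, which are not reproduced here). The sketch below is an illustrative implementation of the token-overlap scoring idea, not code from the paper:

```python
import re
from collections import Counter

def normalize(text: str) -> list[str]:
    # Lowercase, strip punctuation, and split into word tokens.
    return re.sub(r"[^\w\s]", " ", text.lower()).split()

def exact_match(prediction: str, reference: str) -> bool:
    # True when the normalized token sequences are identical.
    return normalize(prediction) == normalize(reference)

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1 over the multiset intersection of tokens,
    # as used in SQuAD-style extractive QA evaluation.
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Example: a model's answer span vs. the annotated gold span.
print(token_f1("process automation systems", "process automation"))  # 0.8
```

Exact match is strict (useful for short factoid spans), while token F1 gives partial credit when a predicted span overlaps the gold span; benchmarks typically report both, averaged over the dataset.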