Improving Multi-hop Logical Reasoning in Small LMs with LoRA Training

Bibliographic Details
Main Authors: Onur Bilgin, Abdullah As Sami, Suraj Kumar, John Licato
Format: Article
Language: English
Published: LibraryPress@UF, 2025-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Subjects:
Online Access: https://journals.flvc.org/FLAIRS/article/view/138643
Description
Summary: Language models show increasing performance on reasoning tasks, but logical reasoning in complex tasks remains a challenge. The challenge is more apparent when resources are limited, for example when using smaller language models or small datasets for knowledge extraction. How can language models be used in such cases to generalize and solve complex logical reasoning tasks? In this work, we show that LoRA training of language models on small datasets can improve logical reasoning and transferability for fact extraction. In our tests, we extracted facts with chain-of-thought (CoT) prompting and used them as input to the rule set. We ran experiments on the StepGame, Navset, Comparison, and TriviaQA datasets and evaluated the results with precision, recall, and accuracy metrics, comparing against untrained language models. Our results show that LoRA training improves logical reasoning even on out-of-distribution samples.
ISSN: 2334-0754, 2334-0762
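
The summary above mentions LoRA training of small language models on small datasets. As a rough illustration of what such a setup can look like (this is not the authors' code; the base model, target modules, and hyperparameters below are assumptions and may differ from the values used in the paper), here is a minimal sketch using the Hugging Face transformers and peft libraries:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder small base model; the paper's model choice may differ.
base_model = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects small low-rank adapter matrices into selected weight matrices,
# so only a tiny fraction of the parameters is updated on the small dataset.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # adapter rank (assumed value)
    lora_alpha=16,              # scaling factor (assumed value)
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2 attention projection layer
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction

The wrapped model can then be fine-tuned with a standard training loop or the transformers Trainer on the fact-extraction data; after training, only the lightweight adapter weights need to be saved.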