Exploration of the Efficiency of SLM-Enabled Platforms for Everyday Tasks

Bibliographic Details
Main Authors: Volodymyr Rusinov, Nikita Basenko
Format: Article
Language: English
Published: Anhalt University of Applied Sciences, 2025-04-01
Series: Proceedings of the International Conference on Applied Innovations in IT
Online Access: https://icaiit.org/paper.php?paper=13th_ICAIIT_1/2_8
Summary: This study explores the potential of Small Language Models (SLMs) as an efficient and secure alternative to larger models such as GPT-4 for a range of natural language processing (NLP) tasks. Amid growing concerns about data privacy and the resource demands of large models, SLMs offer a promising option for research and applications that require fast, cost-effective, locally deployable models. The research evaluates several SLMs on tasks including translation, summarization, Named Entity Recognition (NER), text generation, classification, and retrieval-augmented generation (RAG), comparing their performance against larger counterparts. Models were assessed with metrics specific to each task. The results show that smaller models perform well even on complex tasks, often rivalling or outperforming larger models such as Phi-3.5. The study concludes that SLMs offer an optimal trade-off between performance and computational efficiency, particularly in environments where data security and resource constraints are critical. The findings highlight the growing viability of smaller models across a wide range of real-world applications.
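
To illustrate the kind of task-specific evaluation the abstract describes, the sketch below runs a locally deployed summarization model and scores it with ROUGE. This is a minimal example under stated assumptions, not the authors' evaluation harness: the model checkpoint, test pair, and generation settings are placeholders, and it presumes the `transformers` and `rouge_score` Python packages are installed.

```python
# Minimal sketch of task-specific SLM evaluation (summarization + ROUGE).
# Assumptions: `transformers` and `rouge_score` are installed; the model
# name is a placeholder for whichever locally deployable SLM is under test.
from transformers import pipeline
from rouge_score import rouge_scorer

MODEL_NAME = "sshleifer/distilbart-cnn-12-6"  # placeholder small checkpoint

# Hypothetical single test pair; a real evaluation would iterate a dataset.
source = (
    "Small Language Models (SLMs) can run on local hardware, which helps "
    "when data privacy rules forbid sending text to external APIs. Recent "
    "work suggests they remain competitive on many everyday NLP tasks."
)
reference = (
    "SLMs run locally, preserve privacy, and stay competitive on "
    "everyday NLP tasks."
)

# Run the model locally; no data leaves the machine.
summarizer = pipeline("summarization", model=MODEL_NAME)
candidate = summarizer(source, max_length=40, min_length=10)[0]["summary_text"]

# Score the candidate against the reference (ROUGE-1 and ROUGE-L F-measure).
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.3f}")
```

The same loop structure generalizes to the other tasks the study covers by swapping the pipeline and metric, for example BLEU or chrF for translation and exact-match F1 for NER.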
ISSN: 2199-8876