FArSS: Fast and Efficient Semantic Question Similarity in Arabic
Main Author:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10840214/
Summary: This paper addresses the challenge of efficient semantic question similarity in Arabic by leveraging fastText embeddings and a simple neural network architecture. Our model (FArSS) avoids the complexities of recurrent connections and attention mechanisms, resulting in a streamlined and efficient approach. With strategic data augmentation, our model achieves an F1-score of 0.928, closely competing with state-of-the-art models that rely on advanced architectures employing self-attention mechanisms. Additionally, our model outperforms both GPT-4o and GPT-4 in semantic question similarity in Arabic, underscoring the potential of specialized, efficient models to surpass large language models in specific tasks. This work demonstrates that our method not only maintains high performance but also ensures fast training and inference times. The practical advantages of our approach make it especially suitable for real-time applications, contributing to the development of more effective and efficient natural language processing systems. Our findings highlight the continued importance of efficient, tailored models in addressing specific natural language processing challenges.
ISSN: 2169-3536
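The abstract above does not include implementation details, but the general idea it names (representing questions with fastText word embeddings and comparing them with a lightweight model) can be illustrated with a minimal sketch. The toy embedding table, tokenizer, and example sentences below are placeholders of our own; the actual FArSS model loads pretrained Arabic fastText vectors and feeds the sentence representations to a trained neural classifier rather than raw cosine similarity.

```python
import math

# Toy stand-in for a pretrained fastText vocabulary (hypothetical 3-d vectors);
# a real pipeline would load 300-d Arabic fastText embeddings instead.
TOY_VECTORS = {
    "what": [0.1, 0.3, 0.5],
    "is": [0.2, 0.1, 0.0],
    "capital": [0.9, 0.4, 0.2],
    "egypt": [0.8, 0.5, 0.1],
    "largest": [0.3, 0.9, 0.6],
    "city": [0.7, 0.6, 0.2],
}
DIM = 3

def sentence_vector(text):
    """Average the word vectors of all in-vocabulary tokens."""
    vecs = [TOY_VECTORS[w] for w in text.lower().split() if w in TOY_VECTORS]
    if not vecs:
        return [0.0] * DIM
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

sim = cosine_similarity(
    sentence_vector("what is capital egypt"),
    sentence_vector("what is largest city egypt"),
)
print(round(sim, 3))
```

In the paper's setting the comparison would be learned rather than fixed: the pair of sentence vectors goes into a simple feed-forward network trained on labeled question pairs, which is what lets such a lightweight model compete with attention-based architectures.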