Intelligent text similarity assessment using RoBERTa with integrated chaotic perturbation optimization techniques


Bibliographic Details
Main Authors: Esraa Hassan, Amira Samy Talaat, M. A. Elsabagh
Format: Article
Language: English
Published: SpringerOpen 2025-07-01
Series: Journal of Big Data
Subjects:
Online Access: https://doi.org/10.1186/s40537-025-01233-3
Description
Summary: Precisely evaluating text similarity remains a fundamental challenge in Natural Language Processing (NLP), with widespread applications in plagiarism detection, information retrieval, semantic analysis, and recommendation systems. Traditional approaches often suffer from overfitting, stagnation in local optima, and difficulty capturing deep semantic relationships. To address these challenges, this paper introduces an Intelligent Text Similarity Assessment Model that integrates Robustly Optimized Bidirectional Encoder Representations from Transformers (RoBERTa) with Chaotic Sand Cat Swarm Optimization (CHSCSO), a novel swarm-intelligence-based optimization method inspired by chaotic dynamics. The model leverages RoBERTa’s robust contextual embeddings to extract deep semantic representations while using CHSCSO’s controlled chaotic perturbations to optimize hyperparameters dynamically. This integration enhances model generalization, mitigates overfitting, and improves the balance between exploration and exploitation during training. CHSCSO refines the parameter search space by employing chaotic maps, ensuring a more adaptive and efficient training process. Extensive experiments on multiple benchmark datasets, including Semantic Textual Similarity (STS) and Textual Entailment (TE), demonstrate the model’s superiority over standard RoBERTa fine-tuning and conventional baselines, with cosine similarity scores clustering around 0.996. The optimized model achieves higher accuracy, improved stability, and faster convergence in text similarity tasks.
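The abstract describes hyperparameter tuning driven by chaotic maps rather than uniform random jumps. The sketch below is not the paper's CHSCSO algorithm; it is a minimal, hypothetical illustration of the underlying idea, assuming the commonly used logistic map as the chaotic sequence and reducing the swarm to a single greedy searcher over a one-dimensional hyperparameter. All names (`logistic_map`, `chaotic_perturbation_search`) are illustrative, not taken from the paper.

```python
def logistic_map(x, r=4.0):
    # One step of the logistic map; r = 4.0 yields fully chaotic behaviour on (0, 1).
    return r * x * (1.0 - x)

def chaotic_perturbation_search(objective, low, high, iters=200, seed=0.7):
    """Minimise `objective` over [low, high] using chaotic perturbations
    (a simplified, single-agent stand-in for a chaotic swarm optimizer)."""
    x_chaos = seed                        # chaotic state in (0, 1)
    best = low + x_chaos * (high - low)   # initial candidate from the chaotic state
    best_val = objective(best)
    for _ in range(iters):
        x_chaos = logistic_map(x_chaos)               # advance the chaotic sequence
        step = (x_chaos - 0.5) * (high - low) * 0.1   # small chaotic perturbation
        cand = min(max(best + step, low), high)       # keep the candidate in bounds
        val = objective(cand)
        if val < best_val:                            # greedy acceptance
            best, best_val = cand, val
    return best, best_val

# Toy objective: a one-dimensional "validation loss" with its minimum at 0.3,
# standing in for, e.g., a learning-rate search.
f = lambda lr: (lr - 0.3) ** 2
best_lr, best_loss = chaotic_perturbation_search(f, 0.0, 1.0)
```

Because the logistic-map sequence is deterministic yet non-repeating, the perturbations cover the search interval without the clustering that pseudo-random restarts can exhibit, which is the exploration/exploitation benefit the abstract attributes to chaotic maps.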
ISSN: 2196-1115