AZIM: Arabic-Centric Zero-Shot Inference for Multilingual Topic Modeling With Enhanced Performance on Summarized Text

Bibliographic Details
Main Authors: Sania Aftar, Abdul Rehman, Sonia Bergamaschi, Luca Gagliardelli
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/11058925/
Description
Summary:Topic modeling is an unsupervised learning technique that is widely used for discovering latent topics in large text corpora. However, existing models often fall short in cross-lingual scenarios, particularly for morphologically rich and low-resource languages such as Arabic. Cross-lingual topic analysis extracts shared topics across languages but often relies on resource-intensive datasets or limited translation dictionaries, restricting its diversity and effectiveness. Transfer learning offers a promising solution to these challenges. This paper presents AZIM, an Arabic-centric extension of ZeroShotTM, adapted to use Arabic as the training language for zero-shot multilingual topic modeling. The model’s performance is evaluated across diverse Latin-script and non-Latin-script languages, with a focus on its adaptability to Modern Standard Arabic (MSA) and Classical Arabic (CA). Additionally, the study explores the impact of summarized versus general text. The results show that the summarized versions of the datasets consistently outperform their baselines in interpretability and coherence. The model also demonstrates robust cross-lingual generalization, with non-Latin-script languages such as Persian and Urdu outperforming certain Latin-script languages. However, performance variations across languages reflect the complex nature of multilingual embeddings. The performance gap between Modern Standard Arabic and Classical Arabic reveals a limitation of the pre-trained embeddings, namely their bias toward modern corpora. These findings underscore the importance of adapting techniques to morphologically rich and low-resource languages in order to enhance cross-lingual topic modeling.
ISSN:2169-3536
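
The record gives no implementation details for AZIM itself, but since AZIM extends ZeroShotTM, the workflow the abstract describes (train on Arabic, then infer topics zero-shot on documents in other languages) can be sketched with the contextualized-topic-models Python package that implements ZeroShotTM. The toy corpora, encoder choice, and hyperparameters below are illustrative assumptions, not the authors' configuration.

# Minimal sketch, assuming the contextualized-topic-models package
# (pip install contextualized-topic-models); toy corpora stand in for
# the real datasets evaluated in the paper.
from contextualized_topic_models.models.ctm import ZeroShotTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

# Toy Arabic training documents: the raw text feeds the multilingual
# encoder; a preprocessed copy builds the bag-of-words vocabulary.
arabic_docs = ["الاقتصاد العالمي ينمو بسرعة", "الفريق فاز بالمباراة أمس"]
arabic_docs_bow = arabic_docs  # a real pipeline would tokenize and filter stopwords

# A multilingual sentence encoder; 512 is this model's embedding size.
tp = TopicModelDataPreparation("distiluse-base-multilingual-cased-v1")
training_dataset = tp.fit(text_for_contextual=arabic_docs, text_for_bow=arabic_docs_bow)

# 50 topics is an arbitrary illustrative choice.
ctm = ZeroShotTM(bow_size=len(tp.vocab), contextual_size=512, n_components=50)
ctm.fit(training_dataset)

# Zero-shot inference: only the contextual embedding is used at test
# time, so documents in any language the encoder covers can be scored.
urdu_docs = ["معیشت تیزی سے ترقی کر رہی ہے"]  # toy unseen-language document
test_dataset = tp.transform(text_for_contextual=urdu_docs)
topic_dist = ctm.get_doc_topic_distribution(test_dataset, n_samples=10)
print(ctm.get_topic_lists(10))  # top-10 words per topic, drawn from the Arabic vocabulary

Because the bag-of-words input is not needed at inference, the Urdu document is scored against topics whose word distributions are defined over the Arabic training vocabulary; this property is what makes the Arabic-centric zero-shot transfer described in the abstract possible.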