DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique

Several studies have shown that the Dempster–Shafer theory (DST) can be successfully applied to scenarios where model interpretability is essential. Although DST-based algorithms offer significant benefits, they face challenges in terms of efficiency. We present a method for the Dempster–Shafer Gradient Descent (DSGD) algorithm that significantly reduces training time—by a factor of 1.6—and also reduces the uncertainty of each rule (a condition on features leading to a class label) by a factor of 2.1, while preserving accuracy comparable to other statistical classification techniques.


Bibliographic Details
Main Authors: Aik Tarkhanyan, Ashot Harutyunyan
Format: Article
Language:English
Published: Graz University of Technology 2025-08-01
Series:Journal of Universal Computer Science
Subjects:
Online Access:https://lib.jucs.org/article/164745/download/pdf/
_version_ 1849331502678016000
author Aik Tarkhanyan
Ashot Harutyunyan
author_facet Aik Tarkhanyan
Ashot Harutyunyan
author_sort Aik Tarkhanyan
collection DOAJ
description Several studies have shown that the Dempster–Shafer theory (DST) can be successfully applied to scenarios where model interpretability is essential. Although DST-based algorithms offer significant benefits, they face challenges in terms of efficiency. We present a method for the Dempster–Shafer Gradient Descent (DSGD) algorithm that significantly reduces training time—by a factor of 1.6—and also reduces the uncertainty of each rule (a condition on features leading to a class label) by a factor of 2.1, while preserving accuracy comparable to other statistical classification techniques. Our main contribution is the introduction of a "confidence" level for each rule. Initially, we define the "representativeness" of a data point as the distance from its class's center. Afterward, each rule's confidence is calculated based on the representativeness of the data points it covers. This confidence is incorporated into the initialization of the corresponding Mass Assignment Function (MAF), providing a better starting point for the DSGD's optimizer and enabling faster, more effective convergence. The code is available at https://github.com/HaykTarkhanyan/DSGD-Enhanced.
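The description above outlines a three-step initialization: score each point's representativeness by distance to its class center, aggregate those scores into a per-rule confidence, and use that confidence to seed the rule's Mass Assignment Function. The following is a minimal sketch of that idea; the exact scoring formula, the aggregation (a mean), and the mass layout (remaining mass placed on the full frame of discernment as uncertainty) are assumptions for illustration, not the authors' implementation from the linked repository.

```python
import numpy as np

def representativeness(X, y):
    """Score each point by closeness to its class centroid, in (0, 1].
    (Assumed scaling; the abstract only says it is based on the
    distance from the class's center.)"""
    rep = np.empty(len(X))
    for c in np.unique(y):
        idx = y == c
        center = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - center, axis=1)
        rep[idx] = 1.0 / (1.0 + dist)  # closer to center -> higher score
    return rep

def rule_confidence(rep, covered_mask):
    """Confidence of a rule = mean representativeness of the points it
    covers (hypothetical aggregation; zero if it covers nothing)."""
    return float(rep[covered_mask].mean()) if covered_mask.any() else 0.0

def init_maf(confidence, n_classes, target_class):
    """Initial mass vector for one rule: put `confidence` mass on the
    rule's class and the remainder on the whole frame of discernment,
    i.e. explicit uncertainty (hypothetical initialization scheme)."""
    masses = np.zeros(n_classes + 1)   # last slot = uncertainty (Theta)
    masses[target_class] = confidence
    masses[-1] = 1.0 - confidence
    return masses

# Tiny example: two well-separated classes, one rule covering class 0.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
rep = representativeness(X, y)
conf = rule_confidence(rep, np.array([True, True, False, False]))
maf = init_maf(conf, n_classes=2, target_class=0)
```

A confident rule therefore starts with most of its mass already on its class, so the gradient-descent optimizer begins near a low-uncertainty assignment instead of a uniform one — consistent with the reported faster convergence and lower per-rule uncertainty.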
format Article
id doaj-art-aee0f3ad641f4af5ba06ebacdc924b9c
institution Kabale University
issn 0948-6968
language English
publishDate 2025-08-01
publisher Graz University of Technology
record_format Article
series Journal of Universal Computer Science
spelling doaj-art-aee0f3ad641f4af5ba06ebacdc924b9c | 2025-08-20T03:46:33Z | eng | Graz University of Technology | Journal of Universal Computer Science | 0948-6968 | 2025-08-01 | vol. 31, no. 9, pp. 1004–1014 | doi:10.3897/jucs.164745 | article 164745 | DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique | Aik Tarkhanyan (Yerevan State University) | Ashot Harutyunyan (Institute for Informatics and Automation Problems NAS RA) | https://lib.jucs.org/article/164745/download/pdf/ | Dempster-Shafer Theory | Interpretability | KMeans
spellingShingle Aik Tarkhanyan
Ashot Harutyunyan
DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
Journal of Universal Computer Science
Dempster-Shafer Theory
Interpretability
KMeans
title DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
title_full DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
title_fullStr DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
title_full_unstemmed DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
title_short DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
title_sort dsgd reducing uncertainty and training time in the dsgd classifier through a mass assignment function initialization technique
topic Dempster-Shafer Theory
Interpretability
KMeans
url https://lib.jucs.org/article/164745/download/pdf/
work_keys_str_mv AT aiktarkhanyan dsgdreducinguncertaintyandtrainingtimeinthedsgdclassifierthroughamassassignmentfunctioninitializationtechnique
AT ashotharutyunyan dsgdreducinguncertaintyandtrainingtimeinthedsgdclassifierthroughamassassignmentfunctioninitializationtechnique