DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique


Bibliographic Details
Main Authors: Aik Tarkhanyan, Ashot Harutyunyan
Format: Article
Language: English
Published: Graz University of Technology, 2025-08-01
Series: Journal of Universal Computer Science
Subjects:
Online Access: https://lib.jucs.org/article/164745/download/pdf/
Description
Summary: Several studies have shown that Dempster–Shafer theory (DST) can be successfully applied to scenarios where model interpretability is essential. Although DST-based algorithms offer significant benefits, they face efficiency challenges. We present a method for the Dempster–Shafer Gradient Descent (DSGD) algorithm that reduces training time by a factor of 1.6 and the uncertainty of each rule (a condition on features leading to a class label) by a factor of 2.1, while preserving accuracy comparable to other statistical classification techniques. Our main contribution is the introduction of a "confidence" level for each rule. We first define the "representativeness" of a data point as its distance from its class's center. Each rule's confidence is then calculated from the representativeness of the data points it covers. This confidence is incorporated into the initialization of the corresponding Mass Assignment Function (MAF), giving the DSGD optimizer a better starting point and enabling faster, more effective convergence. The code is available at https://github.com/HaykTarkhanyan/DSGD-Enhanced.
ISSN: 0948-6968
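
The summary above describes three steps: measuring how representative each data point is via its distance from its class's center, aggregating that into a per-rule confidence, and seeding the rule's MAF with that confidence. Below is a minimal Python sketch of this idea; all function names, the distance-to-representativeness mapping, and the MAF layout (singleton class masses plus one uncertainty slot) are illustrative assumptions, not the paper's actual implementation, which is available in the linked repository.

    # Hypothetical sketch of the confidence-based MAF initialization described
    # in the summary. Names and formulas are assumptions for illustration.
    import numpy as np

    def class_centers(X, y):
        """Mean feature vector (center) of each class."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def representativeness(X, y, centers):
        """Map each point's distance to its class center into (0, 1]:
        points nearer the center are more representative.
        (Assumed monotone mapping, not taken from the paper.)"""
        d = np.array([np.linalg.norm(x - centers[c]) for x, c in zip(X, y)])
        return 1.0 / (1.0 + d)

    def rule_confidence(covered_idx, rep):
        """Confidence of a rule: mean representativeness of the points it covers."""
        return rep[covered_idx].mean() if len(covered_idx) else 0.0

    def init_maf(confidence, n_classes, target_class):
        """Initialize a rule's mass assignment: put `confidence` mass on the
        rule's class and the remainder on total uncertainty (the full frame
        of discernment), instead of a maximally uncertain start."""
        masses = np.zeros(n_classes + 1)   # last slot = uncertainty mass
        masses[target_class] = confidence
        masses[-1] = 1.0 - confidence
        return masses

Under this sketch, a rule with confidence 0.8 would begin training with mass 0.8 on its class and 0.2 on the whole frame, so the optimizer starts closer to a plausible solution than it would from a fully uncertain initialization.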