JUCS - Journal of Universal Computer Science 31(9): 1004-1014, doi: 10.3897/jucs.164745
DSGD++: Reducing Uncertainty and Training Time in the DSGD Classifier through a Mass Assignment Function Initialization Technique
Aik Tarkhanyan‡, Ashot Harutyunyan§
‡ Yerevan State University, Yerevan, Armenia; § Institute for Informatics and Automation Problems NAS RA, Yerevan, Armenia
Open Access
Abstract
Several studies have shown that the Dempster–Shafer theory (DST) can be successfully applied to scenarios where model interpretability is essential. Although DST-based algorithms offer significant benefits, they face challenges in terms of efficiency. We present a method for the Dempster–Shafer Gradient Descent (DSGD) algorithm that significantly reduces training time—by a factor of 1.6—and also reduces the uncertainty of each rule (a condition on features leading to a class label) by a factor of 2.1, while preserving accuracy comparable to other statistical classification techniques. Our main contribution is the introduction of a “confidence” level for each rule. First, we define the “representativeness” of a data point as its distance from the center of its class. Each rule’s confidence is then calculated from the representativeness of the data points it covers. This confidence is incorporated into the initialization of the corresponding Mass Assignment Function (MAF), providing a better starting point for the DSGD optimizer and enabling faster, more effective convergence. The code is available at https://github.com/HaykTarkhanyan/DSGD-Enhanced.
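The representativeness-to-confidence pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`class_centers`, `representativeness`, `rule_confidence`), the use of per-class mean vectors as centers (the paper's keywords suggest KMeans may be used instead), and the exponential distance-to-score mapping are all assumptions made for this sketch.

```python
import math

def class_centers(X, y):
    # Hypothetical helper: mean feature vector per class label.
    centers = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        dim = len(pts[0])
        centers[label] = [sum(p[d] for p in pts) / len(pts) for d in range(dim)]
    return centers

def representativeness(x, center, scale=1.0):
    # Map distance from the class center to (0, 1]:
    # a point exactly at the center scores 1. The exponential
    # form and the scale parameter are assumptions of this sketch.
    return math.exp(-math.dist(x, center) / scale)

def rule_confidence(covered, centers):
    # Confidence of a rule = mean representativeness of the
    # labeled points (x, label) that the rule covers.
    scores = [representativeness(x, centers[lbl]) for x, lbl in covered]
    return sum(scores) / len(scores) if scores else 0.0

# Toy usage: two well-separated classes.
X = [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]]
y = [0, 0, 1, 1]
centers = class_centers(X, y)
conf = rule_confidence([(X[0], 0), (X[1], 0)], centers)
```

A confidence computed this way could then seed the rule's MAF: high-confidence rules start with more mass on their predicted class and less on the uncertainty set, giving the gradient-descent optimizer a better-informed starting point.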
Keywords
Dempster-Shafer Theory, Interpretability, KMeans, Mass Assignment Functions, Classification