JUCS - Journal of Universal Computer Science 28(11): 1169-1192, doi: 10.3897/jucs.79905
Feature Selection Using Neighborhood based Entropy
Fatemeh Farnaghi-Zadeh, Mohsen Rahmani, Maryam Amiri
Department of Computer Engineering, Faculty of Engineering, Arak University, 38156-8-8349 Arak, Iran
Abstract

Feature selection plays an important role as a preprocessing step for pattern recognition and machine learning. Its goal is to determine an optimal subset of relevant features out of a large number of features. The neighborhood discrimination index (NDI) is one of the newest and most efficient measures of the distinguishing ability of a feature subset. NDI is computed based on a neighborhood radius (E). Because E has a significant impact on NDI, selecting an appropriate value of E for each data set can be challenging and very time-consuming. This paper proposes a new approach based on targEt PointS To computE neIghborhood relatioNs (EPSTEIN). First, all data points are sorted in descending order of their density. Then, as many of the highest-density data points as there are classes are selected as target points. To determine the neighborhood relations, circles centered on the target points are drawn, and the points inside or on the circles are considered neighbors. Next, the significance of each feature is computed and a greedy algorithm selects appropriate features. The performance of the proposed approach is compared to both the most common and the most recent feature selection methods. The experimental results show that EPSTEIN selects more efficient subsets of features and improves the prediction accuracy of classifiers in comparison to other state-of-the-art methods such as Correlation-based Feature Selection (CFS), Fast Correlation-Based Filter (FCBF), the Heuristic Algorithm Based on Neighborhood Discrimination Index (HANDI), Ranking Based Feature Inclusion for Optimal Feature Subset (KNFI), Ranking Based Feature Elimination (KNFE), and Principal Component Analysis and Information Gain (PCA-IG).
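To make the neighborhood-construction step described above concrete, the following is a minimal sketch, not the authors' implementation: it assumes density is approximated by counting samples within the radius, and the function name `target_point_neighborhoods`, the `radius` parameter, and the synthetic data are all hypothetical choices for illustration only.

```python
# Sketch of the target-point neighborhood step described in the abstract:
# pick one high-density target point per class, then mark every sample that
# lies inside or on the circle of radius `radius` around a target point as
# that target's neighbor. The density estimate here (a simple count of
# samples within `radius`) is an assumption; the paper may define it differently.
import numpy as np

def target_point_neighborhoods(X, n_classes, radius):
    """Return the indices of the target points and a boolean neighborhood matrix."""
    # Pairwise Euclidean distances between all samples.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Density of each sample: number of other samples within `radius`.
    density = (dists <= radius).sum(axis=1) - 1

    # Sort samples by descending density and keep one target point per class.
    targets = np.argsort(-density)[:n_classes]

    # neighbors[i, j] is True if sample j lies inside or on the circle
    # centered at the i-th target point.
    neighbors = dists[targets] <= radius
    return targets, neighbors

# Purely illustrative usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
targets, neighbors = target_point_neighborhoods(X, n_classes=3, radius=1.5)
print(targets, neighbors.sum(axis=1))
```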

Keywords
Feature Selection, Discrimination Index, Neighborhood Relations, Density, Entropy, Distinguishing Ability