
Prof. Mahmoud Salah Mahmoud Goma :: Publications:

Title:
Performance enhancement of standard fuzzy majority voting-based fusion of probabilistic classifiers
Authors: Mahmoud Salah
Year: 2019
Keywords: MCSs, FMV, High Resolution (HR) satellite imagery, feature extraction
Journal: Journal of Geomatics
Volume: 13
Issue: 1
Pages: 24-33
Publisher: Indian Society of Geomatics
Local/International: Local
Paper Link: Not Available
Full paper: Not Available
Supplementary materials: Not Available
Abstract:

Combining classifiers is essential for feature extraction and mapping applications. This paper proposes an approach to improve the performance of one of the most frequently used Multiple Classifier Systems (MCSs), namely Fuzzy Majority Voting (FMV). First, a set of texture attributes has been generated from a 0.82 m pan-sharpened IKONOS image covering the test area. The generated attributes, along with the original image, have been applied as input to three member classifiers: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Classification Trees (CT). Before combination, a weighting criterion has been determined, based on the performance of each member classifier, and assigned to the output of that classifier. After that, FMV has been applied to combine the weighted results from the three member classifiers to extract buildings (B), roads (R) and vegetation (G). The proposed method has been tested and compared with the three member classifiers as well as the standard FMV. The results have been analyzed considering four different aspects: (1) overall accuracy; (2) class accuracy; (3) sensitivity to training sample size; and (4) computational complexity. The proposed method resulted in an overall classification accuracy of about 95.60%, which is 3.88%, 6%, 8.51% and 1.24% better than ANN, SVM, CT and the standard FMV, respectively. Moreover, most of the class accuracies are much better and less variable than those obtained by any member classifier or by the standard FMV. While the proposed method is stable and always outperforms the individual classifiers, even with small training samples, its computational cost remains comparable with that of the standard FMV.
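The abstract describes weighting each classifier's soft output by its measured performance and then fusing the weighted outputs with fuzzy majority voting. The paper's exact weighting criterion and FMV formulation are not given here, so the following is only a minimal sketch, assuming an OWA-style fuzzy majority ("most" quantifier) aggregation over accuracy-derived weights; all function names (quantifier_most, owa_weights, weighted_fmv) and the example weights are illustrative, not taken from the paper.

```python
import numpy as np

def quantifier_most(r, a=0.3, b=0.8):
    """Fuzzy linguistic quantifier 'most' (a common choice; assumed here)."""
    return np.clip((r - a) / (b - a), 0.0, 1.0)

def owa_weights(n, a=0.3, b=0.8):
    """OWA weights w_k = Q(k/n) - Q((k-1)/n) derived from the quantifier."""
    ks = np.arange(1, n + 1)
    return quantifier_most(ks / n, a, b) - quantifier_most((ks - 1) / n, a, b)

def weighted_fmv(prob_maps, clf_weights):
    """
    prob_maps   : list of (n_pixels, n_classes) per-classifier probability outputs
    clf_weights : per-classifier reliability weights (e.g. validation accuracies)
    Returns the fused class label for each pixel.
    """
    # Scale each classifier's soft output by its performance-based weight.
    scaled = np.stack([w * p for w, p in zip(clf_weights, prob_maps)])  # (n_clf, n_pix, n_cls)
    # Order the classifier supports per pixel/class (descending) and apply the
    # OWA weights, which realises the fuzzy-majority aggregation.
    ordered = -np.sort(-scaled, axis=0)
    w = owa_weights(scaled.shape[0])[:, None, None]
    fused = (w * ordered).sum(axis=0)                                   # (n_pix, n_cls)
    return fused.argmax(axis=1)

# Example: three member classifiers (ANN, SVM, CT) and four classes (B, R, G, other),
# with hypothetical accuracy-based weights.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=10) for _ in range(3)]
labels = weighted_fmv(probs, clf_weights=[0.92, 0.90, 0.87])
```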
