
Dr. Ahmed Hassan Mohammed Fares :: Publications

Title:
A Context-Supported Deep Learning Framework for Multimodal Brain Imaging Classification
Authors: Jianmin Jiang; Ahmed Fares; Sheng-Hua Zhong
Year: 2019
Keywords: Deep learning, electroencephalogram (EEG), explicit learning modality, implicit learning modality, object classification
Journal: IEEE Transactions on Human-Machine Systems
Volume: Not Available
Issue: Not Available
Pages: Not Available
Publisher: Not Available
Local/International: International
Paper Link:
Full paper: Not Available
Supplementary materials: Not Available
Abstract:

Over the past decade, “content-based” multimedia systems have achieved notable success. By comparison, brain imaging and classification systems require further improvement in accuracy, generalization, and interpretation, and the relationship between electroencephalogram (EEG) signals and the corresponding multimedia content needs to be explored more deeply. In this paper, we integrate implicit and explicit learning modalities into a context-supported deep learning framework and propose an improved solution for brain imaging classification via EEG signals. In our framework, we introduce a consistency test that exploits the context of brain images and establishes a mapping between visual-level features and cognitive-level features inferred from EEG signals. In this way, a multimodal approach can be developed that delivers an improved solution for brain imaging and its classification, drawing on explicit learning modalities and research from the image processing community. In addition, a number of fusion techniques are investigated to optimize the individual classification results. Extensive experiments have been carried out, and their results demonstrate the effectiveness of the proposed framework. In comparison with existing state-of-the-art approaches, our framework achieves superior performance not only on the standard visual object classification criteria but also in the exploitation of transfer learning. For the convenience of research dissemination, we make the source code publicly available on GitHub (https://github.com/aneeg/dual-modal-learning).
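The abstract describes the framework only at a high level; the authors' repository defines the actual architecture. As a rough, hedged illustration of the dual-modality idea, the sketch below shows a minimal late-fusion classifier that combines visual-level features (explicit modality) with EEG-derived cognitive-level features (implicit modality). The feature dimensions, layer sizes, class count, and learnable weighted-sum fusion rule are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch only: the real model lives in the authors' repository
# (https://github.com/aneeg/dual-modal-learning). Dimensions, class count,
# and the weighted-sum fusion rule here are assumptions for illustration.
import torch
import torch.nn as nn


class DualModalFusion(nn.Module):
    """Late fusion of visual-level and EEG-derived cognitive-level features."""

    def __init__(self, visual_dim=512, eeg_dim=128, num_classes=40):
        super().__init__()
        self.visual_head = nn.Linear(visual_dim, num_classes)  # explicit (image) modality
        self.eeg_head = nn.Linear(eeg_dim, num_classes)        # implicit (EEG) modality
        self.alpha = nn.Parameter(torch.tensor(0.5))           # learnable fusion weight

    def forward(self, visual_feat, eeg_feat):
        v_logits = self.visual_head(visual_feat)
        e_logits = self.eeg_head(eeg_feat)
        # Weighted-sum (late) fusion of the two per-modality predictions.
        return self.alpha * v_logits + (1.0 - self.alpha) * e_logits


if __name__ == "__main__":
    model = DualModalFusion()
    visual = torch.randn(8, 512)  # e.g., CNN image features
    eeg = torch.randn(8, 128)     # e.g., features inferred from EEG signals
    print(model(visual, eeg).shape)  # torch.Size([8, 40])
```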
