Issue title: Special Section: Soft Computing and Intelligent Systems: Techniques and Applications
Guest editors: Sabu M. Thampi, El-Sayed M. El-Alfy, Sushmita Mitra and Ljiljana Trajkovic
Article type: Research Article
Authors: Dai, Yinglong [a] | Wang, Guojun [b],* | Li, Kuan-Ching [c]
Affiliations: [a] School of Information Science and Engineering, Central South University, Changsha, China | [b] School of Computer Science and Educational Software, Guangzhou University, Guangzhou, China | [c] Department of Computer Science and Information Engineering, Providence University, Taichung, Taiwan and Hubei University of Education, Wuhan, China
Correspondence: [*] Corresponding author. Guojun Wang, School of Computer Science and Educational Software, Guangzhou University, Guangzhou 510006, China. E-mail: [email protected].
Abstract: Deep Neural Networks (DNNs) have powerful recognition abilities for classifying different objects. Although DNN models can reach very high accuracy, even beyond human level, they are regarded as black boxes that lack interpretability. During training, DNNs automatically extract abstract features from high-dimensional data such as images. However, the extracted features are usually mapped into a representation space that is not aligned with human knowledge. In some cases, such as medical diagnosis, interpretability is necessary. To align the representation space with human knowledge, this paper proposes a class of DNNs, termed Conceptual Alignment Deep Neural Networks (CADNNs), which produce interpretable representations in their hidden layers. In CADNNs, some hidden neurons are selected as conceptual neurons to extract human-formed concepts, while the remaining hidden neurons, called free neurons, are trained without such constraints. All hidden neurons contribute to the final classification result. Experiments demonstrate that CADNNs match the accuracy of standard DNNs despite the extra constraints on the conceptual neurons, and also reveal that in some cases the free neurons can learn concepts aligned with human knowledge.
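The abstract outlines the core mechanism: a hidden layer is partitioned into conceptual neurons, supervised with human-labelled concept targets through an auxiliary loss, and free neurons trained only by the classification objective, with both groups feeding the classifier. The following is a minimal PyTorch-style sketch of that idea only; the class name, layer sizes, sigmoid activation, and binary-cross-entropy alignment loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CADNNSketch(nn.Module):
    """Hypothetical sketch: one hidden layer split into conceptual and free neurons."""

    def __init__(self, in_dim=784, hidden_dim=128, n_concepts=10, n_classes=10):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.n_concepts = n_concepts  # first n_concepts hidden units act as conceptual neurons
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # full hidden layer in [0, 1]
        concepts = h[:, : self.n_concepts]      # conceptual neurons, to be aligned with concept labels
        logits = self.classifier(h)             # conceptual and free neurons both feed the classifier
        return logits, concepts

def cadnn_loss(logits, concepts, labels, concept_targets, alpha=1.0):
    # Classification loss over all hidden neurons plus an auxiliary
    # alignment loss that pushes conceptual neurons toward the
    # human-labelled concept targets (alpha weights the two terms).
    cls_loss = F.cross_entropy(logits, labels)
    align_loss = F.binary_cross_entropy(concepts, concept_targets)
    return cls_loss + alpha * align_loss
```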
Keywords: Deep neural networks, conceptual alignment, interpretability, supervised learning, representation learning
DOI: 10.3233/JIFS-169457
Journal: Journal of Intelligent & Fuzzy Systems, vol. 34, no. 3, pp. 1631-1642, 2018