Article type: Research Article
Authors: Shukla, Shilpi[a],* | Jain, Madhu[b]
Affiliations: [a] Department of Electronics & Communication Engineering, Mahatma Gandhi Mission’s College of Engineering & Technology, A-9 Sector 62, Noida, India | [b] Department of Electronics & Communication Engineering, Jaypee Institute of Information Technology, A-10, Sector 62, Noida (Uttar Pradesh), India
Correspondence: [*] Corresponding author. Shilpi Shukla, Department of Electronics & Communication Engineering, Mahatma Gandhi Mission’s College of Engineering & Technology, A-9 Sector 62, Noida, India. E-mail: [email protected].
Abstract: Human emotion recognition from speech signals has been an emerging topic in recent decades. Recognizing emotion from speech is difficult because of variations in speaking style, voice quality, the speaker’s cultural background, the recording environment, etc. Although numerous signal processing methods and frameworks exist to detect and characterize emotions in speech, they do not achieve full speech emotion recognition (SER) accuracy and success rates. This paper proposes a novel algorithm, the deep ganitrus algorithm (DGA), to recognize the various categories of emotion in an input speech signal with better accuracy. DGA combines independent component analysis with the Fisher criterion for feature extraction and a deep belief network with wake-sleep training for emotion classification. The algorithm is inspired by Elaeocarpus ganitrus (the rudraksha seed), whose beads bear 1 to 21 lines; the single-line bead is the rarest to find, and analogously, isolating a single emotion from a speech signal is also complex. The proposed DGA is experimentally verified on the Berlin database. Finally, the evaluation results are compared with existing frameworks, and the test results show better recognition accuracy than all other current algorithms.
Keywords: Speech signal, emotion recognition, deep analysis, deep ganitrus algorithm, recognition accuracy
DOI: 10.3233/JIFS-201491
Journal: Journal of Intelligent & Fuzzy Systems, vol. 43, no. 5, pp. 5353-5368, 2022
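
As a rough illustration of the pipeline described in the abstract, the sketch below chains ICA-based feature extraction, a Fisher-criterion feature ranking, and a neural classifier. It is not the authors' DGA implementation: the MFCC front end, scikit-learn's FastICA, the simple per-feature Fisher score, and the MLP stand-in for the deep belief network with wake-sleep training are all assumptions made for illustration, and the data loading (e.g., from the Berlin EMO-DB) is left as a placeholder.

# Hedged sketch only: ICA features + Fisher-criterion ranking + neural classifier.
# The DBN with wake-sleep from the paper is approximated here by an MLP.
import numpy as np
import librosa
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def extract_features(wav_path, n_mfcc=26):
    """Frame-level MFCCs averaged over time -> one feature vector per utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def fisher_scores(X, y):
    """Per-feature Fisher criterion: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def train_and_evaluate(X_train, y_train, X_test, y_test, n_keep=12):
    """X_*: utterance-level feature matrices (e.g., built with extract_features);
    y_*: emotion labels. Returns classification accuracy on the test split."""
    ica = FastICA(n_components=min(2 * n_keep, X_train.shape[1]), random_state=0)
    Z_train = ica.fit_transform(X_train)
    Z_test = ica.transform(X_test)
    # Keep the independent components with the highest Fisher scores.
    keep = np.argsort(fisher_scores(Z_train, y_train))[::-1][:n_keep]
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    clf.fit(Z_train[:, keep], y_train)
    return clf.score(Z_test[:, keep], y_test)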