Article type: Research Article
Authors: Ye, Qing* | Zhong, Haoxin | Qu, Chang | Zhang, Yongmei
Affiliations: School of Information Science and Technology, North China University of Technology, Beijing, China
Correspondence: [*] Corresponding author: Qing Ye, School of Information Science and Technology, North China University of Technology, Beijing 100144, China. E-mail: [email protected].
Abstract: Human activity recognition is a key technology in intelligent video surveillance and an important research direction in computer vision. However, human interaction features are complex, and the motion characteristics of an action differ across its time periods. In this paper, a human interaction recognition algorithm based on a parallel multi-feature fusion network is proposed. First, because different time periods of an action carry different amounts of information, an improved time-phased video downsampling method based on a Gaussian model is proposed. Second, an Inception module extracts features with convolution kernels of different scales, which improves network performance while reducing the number of network parameters, and a ResNet module mitigates the degradation problem caused by increasing network depth and achieves higher classification accuracy. We therefore combine the advantages of the Inception network and ResNet to extract feature information, merge the extracted features, and continue training on the fused representation, realizing a parallel multi-feature neural network. Experiments are carried out on the UT dataset. Compared with traditional activity recognition algorithms, the proposed method recognizes six kinds of interactive actions more effectively, reaching an accuracy of 88.9%.
Keywords: Parallel multi-feature fusion network, Gaussian model downsampling, human interaction recognition
DOI: 10.3233/IDA-205217
Journal: Intelligent Data Analysis, vol. 25, no. 4, pp. 809-823, 2021
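The following is a minimal sketch of the two ideas outlined in the abstract: Gaussian-weighted temporal downsampling and a parallel Inception/ResNet feature-fusion network. It assumes PyTorch and NumPy; all layer sizes, kernel choices, and Gaussian parameters are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch only: layer widths, kernel sizes, and the Gaussian
# sampling parameters are assumptions, not the paper's exact settings.
import numpy as np
import torch
import torch.nn as nn


def gaussian_frame_sampling(num_frames, num_samples, mean_ratio=0.5, std_ratio=0.2):
    """Sample frame indices with a Gaussian weighting over time, so frames near
    the assumed information-rich phase of the action are kept more often."""
    idx = np.arange(num_frames)
    mu, sigma = mean_ratio * num_frames, std_ratio * num_frames
    weights = np.exp(-0.5 * ((idx - mu) / sigma) ** 2)
    weights /= weights.sum()
    chosen = np.random.choice(idx, size=num_samples, replace=False, p=weights)
    return np.sort(chosen)


class InceptionBranch(nn.Module):
    """Multi-scale convolutions (1x1, 3x3, 5x5) whose outputs are concatenated."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)


class ResidualBranch(nn.Module):
    """A single residual block: the identity shortcut eases optimization at depth."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))


class ParallelFusionNet(nn.Module):
    """Run the Inception-style and residual branches in parallel, fuse their
    pooled features by concatenation, then classify six interaction classes."""
    def __init__(self, in_ch=3, num_classes=6):
        super().__init__()
        self.inception = InceptionBranch(in_ch, 16)   # 3 sub-branches -> 48 channels
        self.residual = ResidualBranch(in_ch)          # keeps in_ch channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(48 + in_ch, num_classes)

    def forward(self, x):
        fused = torch.cat([self.pool(self.inception(x)),
                           self.pool(self.residual(x))], dim=1).flatten(1)
        return self.fc(fused)


if __name__ == "__main__":
    frames = gaussian_frame_sampling(num_frames=120, num_samples=16)
    logits = ParallelFusionNet()(torch.randn(2, 3, 112, 112))
    print(frames, logits.shape)   # sampled frame indices, torch.Size([2, 6])
```

In this sketch the fusion is a simple concatenation of globally pooled branch features before a linear classifier; the paper's network continues training after the merge, so in practice further layers would follow the concatenation.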