Article type: Research Article
Authors: Wang, Bing | Huang, Xianglin* | Cao, Gang | Yang, Lifang | Wei, Xiaolong | Tao, Zhulin
Affiliations: State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China
Correspondence: [*] Corresponding author. Xianglin Huang, State Key Laboratory of Media Convergence and Communication, Communication University of China, Dingfuzhuang No. 1, Chaoyang District, Beijing, 100024, China. E-mail: [email protected].
Abstract: Many micro-video applications, such as personalized location recommendation and micro-video verification, can benefit greatly from venue information. Most existing works focus on integrating multi-modal information for exact venue category recognition, and it is important to make full use of the information from different modalities. However, performance may be limited when uploaded micro-videos lack the acoustic modality or textual descriptions. Therefore, in this paper the visual modality is explored as the only modality, owing to its rich and indispensable semantic information. To this end, a hybrid-attention and frame difference enhanced network (HAFDN) is proposed to generate a comprehensive venue representation. The network mainly contains two parallel branches: a content branch and a motion branch. Specifically, in the content branch, a domain-adaptive CNN model combined with a temporal shift module (TSM) is employed to extract discriminative visual features. Then, a novel hybrid attention module (HAM) is introduced to enhance the extracted features via three attention mechanisms. In HAM, channel attention and local and global spatial attention mechanisms are used to capture salient visual information from different views. In addition, convolutional Long Short-Term Memory (convLSTM) is applied after HAM to better encode long spatial-temporal dependencies. A difference-enhanced module parallel to HAM is devised to learn the content variations among adjacent frames, which are usually ignored in prior works. Moreover, in the motion branch, 3D-CNNs and LSTM are used to capture movement variation as a supplement to the content branch in a different form. Finally, the features from the two branches are fused to generate robust video-level representations for predicting venue categories. Extensive experimental results on public datasets verify the effectiveness of the proposed micro-video venue recognition scheme.
The source code is available at https://github.com/hs8945/HAFDN.
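To make the abstract's key ideas concrete, the sketch below illustrates, in NumPy, two of the building blocks it describes: computing content variations between adjacent frames (the input to a difference-enhanced module) and a toy channel-attention reweighting. This is an illustrative sketch only, not the authors' implementation; for the actual HAFDN code, see the repository linked above. All function names and shapes here are assumptions chosen for the example.

```python
import numpy as np

def frame_differences(frames):
    """Absolute differences between adjacent frames.

    frames: array of shape (T, H, W, C).
    Returns an array of shape (T-1, H, W, C) capturing content
    variation between neighboring frames, the kind of signal a
    difference-enhanced module would consume.
    """
    return np.abs(np.diff(frames.astype(np.float32), axis=0))

def channel_attention(features):
    """Toy channel attention: softmax over per-channel global averages.

    features: feature map of shape (H, W, C).
    Returns a reweighted map of the same shape, emphasizing channels
    with stronger average response.
    """
    pooled = features.mean(axis=(0, 1))              # (C,) global average pool
    weights = np.exp(pooled) / np.exp(pooled).sum()  # softmax over channels
    return features * weights                        # broadcast over H and W

# Example: a clip of 4 frames, each 8x8 with 3 channels
clip = np.random.rand(4, 8, 8, 3).astype(np.float32)
diffs = frame_differences(clip)
attended = channel_attention(clip[0])
print(diffs.shape)     # (3, 8, 8, 3)
print(attended.shape)  # (8, 8, 3)
```

In HAFDN the analogous operations act on learned CNN feature maps rather than raw pixels, and the spatial attention branches (local and global) would reweight positions rather than channels.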
Keywords: Micro-video venue recognition, robust visual features, hybrid attention module, difference enhanced module
DOI: 10.3233/JIFS-213191
Journal: Journal of Intelligent & Fuzzy Systems, vol. 43, no. 3, pp. 3337-3353, 2022