Article type: Research Article
Authors: Huang, Danyang [a,b] | Zhou, Zhiheng [a,b,*] | Deng, Ming [a,b] | Li, Zhihao [a,b]
Affiliations: [a] School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China | [b] Key Laboratory of Big Data and Intelligent Robot, South China University of Technology, Ministry of Education, China
Correspondence: [*] Corresponding author. Zhiheng Zhou, School of Electronic and Information Engineering, South China University of Technology, Guangzhou, 510640, China. E-mail: [email protected].
Abstract: Detecting vehicles at night is critical to both driver-assistance systems and autonomous driving systems. In this paper, we propose a deep network scheme assisted by light information, with good generalization, to detect vehicles at night. Our approach is divided into two branches: the object stream and the pixel stream. The object stream generates a batch of bounding boxes, and the pixel stream uses the vehicle-light information to calibrate the bounding boxes of the object stream. In the object stream, we propose a new structure, Direction Attention Pooling (DAP), to improve the accuracy of the prior boxes. DAP introduces an attention mechanism: the feature maps obtained from the backbone network are divided into two branches, where one branch obtains direction perception information through an IRNN layer and the other branch learns attention weights. The weights are multiplied with the direction perception features in an element-wise manner. In the pixel stream, we propose a corner localization algorithm based on Bayes' theorem to obtain more accurate corners from the vehicle-light pixels. The locations of the corners are treated as a discrete random variable; given the mask of the object, we solve for the probability distribution of the object's corners, and the corner with the highest probability is taken as the correct corner. On the nighttime vehicle detection datasets CHUK and SYSU, our method achieves accuracies of 97.2% and 96.86%, outperforming other state-of-the-art methods by at least 0.31% and 0.34%, respectively.
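The following is a minimal PyTorch-style sketch of the two-branch DAP idea as described in the abstract, not the authors' released implementation: one branch performs an IRNN-style four-direction sweep for direction perception, the other learns attention weights, and the two are combined element-wise. The class and layer choices (SimpleIRNN, a 1x1-convolution sigmoid attention branch) are illustrative assumptions.

    # Hedged sketch of Direction Attention Pooling (DAP); names and layer
    # choices are hypothetical, inferred from the abstract's description.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleIRNN(nn.Module):
        """Four-direction ReLU recurrence over a feature map (IRNN-style sweep)."""
        def __init__(self, channels):
            super().__init__()
            # One 1x1 recurrent projection per sweep direction.
            self.recur = nn.ModuleList(
                [nn.Conv2d(channels, channels, 1) for _ in range(4)])

        def _sweep(self, x, conv, dim, reverse):
            # Accumulate hidden states along rows or columns with a ReLU recurrence.
            steps = range(x.size(dim) - 1, -1, -1) if reverse else range(x.size(dim))
            h = torch.zeros_like(x.select(dim, 0))
            outs = [None] * x.size(dim)
            for i in steps:
                h = F.relu(x.select(dim, i) + conv(h.unsqueeze(dim)).squeeze(dim))
                outs[i] = h
            return torch.stack(outs, dim=dim)

        def forward(self, x):
            # Sweep left-to-right, right-to-left, top-to-bottom, bottom-to-top and sum.
            return (self._sweep(x, self.recur[0], 3, False)
                    + self._sweep(x, self.recur[1], 3, True)
                    + self._sweep(x, self.recur[2], 2, False)
                    + self._sweep(x, self.recur[3], 2, True))

    class DirectionAttentionPooling(nn.Module):
        """Backbone features feed two branches: direction-aware features (IRNN)
        and learned attention weights, multiplied element-wise."""
        def __init__(self, channels):
            super().__init__()
            self.irnn = SimpleIRNN(channels)
            self.attn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

        def forward(self, feat):
            direction = self.irnn(feat)   # direction perception branch
            weights = self.attn(feat)     # attention weight branch
            return direction * weights    # element-wise modulation

For example, feeding a backbone feature map of shape (batch, channels, H, W) through DirectionAttentionPooling returns a feature map of the same shape in which directional context is re-weighted by the learned attention.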
Keywords: Nighttime vehicle detection, advanced driver-assistance systems, attention mechanism, deep learning
DOI: 10.3233/JIFS-202676
Journal: Journal of Intelligent & Fuzzy Systems, vol. 41, no. 1, pp. 783-801, 2021