Stereo Vision based Forward Collision Warning and Avoidance System

Yunhee Lee, Byungjoo Kim, Hogi Jung, Paljoo Yoon
Central R&D Center, MANDO Corporation, 413-5, Gomae-Ri, Gibeung-Eub, Youngin-Si, Kyonggi-Do, Korea

Abstract: This paper describes a stereo vision based forward collision warning and avoidance system consisting of a forward obstacle monitoring sensor and a brake actuator for collision avoidance. Several sensors can be used for forward obstacle monitoring: radar, laser, camera, or sensor fusion. The system proposed in this paper uses stereo vision, which can acquire distance information that a single camera cannot. The proposed system establishes an ROI (Region Of Interest) through ego-lane detection; it therefore detects obstacles in the ego lane and reduces computation time by discarding regions of no interest. In addition, the system uses feature-based stereo matching, which is relatively insensitive to noise and computationally cheap. Since the number of disparity votes depends on distance, an adaptive threshold is applied to the disparity histogram. Finally, the system performs active braking for collision avoidance; the effectiveness of active braking with ESC for avoiding imminent rear-end collisions has been demonstrated.

Key words: Forward Collision Avoidance, Stereo Vision, Lane Recognition, Active Braking
Fig. 1 System block diagram
Fig. 2 Lane mark image comparison: (a) input image, (b) bird's eye view image

S(x, y) = 2B(x, y) − B(x − w, y) − B(x + w, y),  1 ≤ x ≤ W, 1 ≤ y ≤ H

Fig. 3 Lane detection: (a) template matching result, (b) ego-lane detection result
Fig. 4 ROI establishment: (a) ego-lane projection result onto the input image, (b) ROI detection result
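The lane-mark template response above can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation; the function name and the zero-padding of image borders are my own assumptions:

```python
import numpy as np

def lane_mark_response(B, w):
    """Per-pixel lane-mark template response on a bird's-eye-view
    grayscale image B. A bright lane pixel flanked by darker road at
    horizontal offset w scores high:
        S(x, y) = 2*B(x, y) - B(x - w, y) - B(x + w, y)
    Border columns (within w of the edge) are left at 0 (assumption).
    """
    B = B.astype(np.int32)            # avoid uint8 overflow/underflow
    S = np.zeros_like(B)
    S[:, w:-w] = 2 * B[:, w:-w] - B[:, :-2 * w] - B[:, 2 * w:]
    return S
```

A vertical bright stripe on a dark background yields a strong positive peak at the stripe and negative responses at offset w to either side, which is what makes the subsequent template-matching step in Fig. 3 select lane-mark pixels.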
d(i) = 1 if g(i) − g(x) > T; 2 if g(i) − g(x) < −T; 0 otherwise

d(x) = Σ_{i=0..3} 3^i · d(i)

Fig. 5 Relation between center pixel and neighborhood pixels: (a) neighborhood pixel index, (b) encoded neighborhood pixels
Fig. 6 Result of fixed T: (a), (d) input images for different light conditions; (b), (e) feature detection results with T = 5; (c), (f) feature detection results with T = 35
Fig. 7 Adaptive feature extraction result: (a) result for Fig. 6(a), (b) result for Fig. 6(d)
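A minimal sketch of this ternary encoding, assuming the four neighbors are indexed i = 0..3 as in Fig. 5 (the function and variable names are illustrative; the paper additionally makes T adaptive to the light condition, which is omitted here):

```python
def ternary_code(center, neighbors, T):
    """Encode 4 neighbor pixels relative to the center pixel:
       d(i) = 1 if g(i) - g(x) > T, 2 if g(i) - g(x) < -T, else 0,
    then pack the four ternary digits base-3: d(x) = sum 3^i * d(i).
    The result is a feature code in [0, 80].
    """
    code = 0
    for i, g in enumerate(neighbors):
        diff = int(g) - int(center)
        if diff > T:
            d = 1
        elif diff < -T:
            d = 2
        else:
            d = 0
        code += (3 ** i) * d
    return code
```

A flat patch encodes to 0 regardless of T, so only pixels with sufficient local contrast become features, which is the property the adaptive-threshold comparison in Fig. 6/Fig. 7 exploits.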
S_class(x, y, s) = (1/(3·3)) Σ_{u=−1..1} Σ_{v=−1..1} f(L_class(x+u, y+v), R_class(x+u+s, y+v))

f(a, b) = 1 if a = b; 0 if a ≠ b

S_color(x, y, s) = 1 − (1/(256·5·5)) Σ_{u=−2..2} Σ_{v=−2..2} (X_left(x+u, y+v) − X_right(x+u+s, y+v))²,  X ∈ [R, G, B]

S(x, y, s) = S_color(x, y, s) · S_class(x, y, s)

Fig. 8 Stereo matching result: (a) left input image, (b) right input image, (c) stereo matching result
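The combined class/color similarity can be sketched as below. The class term is the fraction of equal feature codes in a 3×3 window; for the color term, the normalization constant here (256² · 5 · 5 · 3) is an assumption chosen so identical windows score exactly 1, since the constant in the extracted formula (256 · 5 · 5) does not bound a squared 8-bit difference:

```python
import numpy as np

def match_score(L_class, R_class, L_rgb, R_rgb, x, y, s):
    """Combined similarity for candidate disparity s at pixel (x, y):
    S_class over a 3x3 window of feature codes times S_color over a
    5x5 RGB window (1 minus a normalized sum of squared differences).
    Assumes (x, y) is far enough from the borders for both windows.
    """
    # class term: fraction of matching feature codes in the 3x3 window
    Lw = L_class[y - 1:y + 2, x - 1:x + 2]
    Rw = R_class[y - 1:y + 2, x + s - 1:x + s + 2]
    s_class = float(np.mean(Lw == Rw))
    # color term over a 5x5 window, all three channels
    Lc = L_rgb[y - 2:y + 3, x - 2:x + 3].astype(np.float64)
    Rc = R_rgb[y - 2:y + 3, x + s - 2:x + s + 3].astype(np.float64)
    s_color = 1.0 - np.sum((Lc - Rc) ** 2) / (256.0 ** 2 * 5 * 5 * 3)
    return s_class * s_color
```

The product form means a disparity candidate must agree both in feature structure and in color; either term alone would admit more false matches.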
Fig. 10 Adaptive threshold

w(i) = (1/(MH · MD)) · (DH(i) − T(i)) · (MD − Disparity(i))

Fig. 11 Verification of vehicle candidates
Fig. 12 Controller block diagram
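The disparity-histogram weighting can be sketched as follows, under the assumption (inferred from context, not stated explicitly in the extracted text) that DH(i) is the histogram count of bin i, T(i) the adaptive threshold for that bin, MH the maximum histogram count, and MD the maximum disparity:

```python
def disparity_weights(DH, T, disparity, MH, MD):
    """Weight for each disparity-histogram bin i:
        w(i) = (DH(i) - T(i)) * (MD - disparity(i)) / (MH * MD)
    Bins whose count exceeds the adaptive threshold get positive
    weight, scaled by how far the bin's disparity is from the maximum.
    """
    return [(DH[i] - T[i]) * (MD - disparity[i]) / (MH * MD)
            for i in range(len(DH))]
```

Dividing by MH · MD keeps the weights dimensionless, so candidates from different distances (and hence different disparity ranges) can be compared directly in the verification step of Fig. 11.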
R_decision = R_delay + R_braking

R_delay = V_relative · T_delay + (1/2) · a_relative · T_delay²

R_braking = V_relative² / (2 · a_control)

Fig. 13 Conceptual figure of deceleration control
Fig. 14 Experimental environment: (a) stereo camera, (b) laser radar, (c) winter test track (China), (d) balloon car
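The decision range above combines the distance covered during the system delay with the braking distance at the controlled deceleration; a direct transcription (function and argument names are illustrative):

```python
def decision_range(v_relative, a_relative, t_delay, a_control):
    """Distance at which active braking must be triggered:
    R_decision = R_delay + R_braking, where
      R_delay   = v_relative * t_delay + 0.5 * a_relative * t_delay**2
                  (closing distance accumulated during the delay), and
      R_braking = v_relative**2 / (2 * a_control)
                  (stopping distance at deceleration a_control).
    All quantities in SI units (m, m/s, m/s^2, s).
    """
    r_delay = v_relative * t_delay + 0.5 * a_relative * t_delay ** 2
    r_braking = v_relative ** 2 / (2.0 * a_control)
    return r_delay + r_braking
```

For example, closing at 20 m/s with zero relative acceleration, a 0.5 s delay, and 8 m/s² of controlled deceleration gives 10 m of delay distance plus 25 m of braking distance, so braking must begin 35 m out.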
Fig. 15 Comparison of ROI methods on a road with curvature: (a) fixed ROI, (b) wrongly detected vehicle, (c) lane ROI, (d) correctly detected vehicle
Fig. 16 Comparison of two threshold methods: (a) wrong detection result, (b) constant threshold, (c) correct detection result, (d) adaptive threshold
Fig. 17 Comparison of laser radar data and stereo vision result
Fig. 18 Active braking result on a low-µ surface: (a) relative distance, (b) ego-vehicle velocity, (c) brake pressure