ISSN: Online 2319-8753, Print 2347-6710

A Dedicated System for Monitoring of Driver’s Fatigue

K. Subhashini Spurjeon1, Yogesh Bahindwar2
Department of Electronics and Telecommunication, SSGI, Bhilai, Chhattisgarh, India1,2

International Journal of Innovative Research in Science, Engineering and Technology


This paper describes a real-time system for analyzing video sequences of a driver and determining his or her level of attention. The proposed system computes the Percent of Eyelid Closure (PERCLOS) as an indicator of drowsiness. Driver fatigue and drowsiness are major causes of road traffic accidents; monitoring the driver's vigilance level and issuing an alert when he or she is not paying enough attention to the road is a promising way to reduce accidents caused by driver factors. Visual information is acquired using a specially designed solution that combines a CCD video camera with an IR illumination system. The system is fully automatic: it detects eye position and eye closure and recovers the gaze of the eyes. Experimental results using real images demonstrate the accuracy and robustness of the proposed solution, which could therefore become an important part of the development of the advanced safety vehicle. The driver's facial information, especially the eye state, is widely believed to give clues to his or her drowsiness level, so a reliable real-time driver eye-detection method is an essential part of any such monitoring system. The proposed method departs from traditional drowsiness detection to achieve real-time operation: it uses face detection and eye detection to initialize the location of the driver's eyes; an object-tracking method then keeps track of the eyes; finally, the driver's drowsiness state is identified from PERCLOS computed over the detected eye states.


Driver vigilance, eyelid movement, face position, percent of eyelid closure (PERCLOS), visual fatigue behaviors


The increasing number of traffic accidents due to a diminished driver vigilance level has become a serious problem for society. In Europe, statistics show that between 10% and 20% of all traffic accidents are due to drivers with a diminished vigilance level caused by fatigue. In the trucking industry, about 60% of fatal truck accidents are caused by driver fatigue; it is the main cause of heavy-truck crashes [1]. According to the U.S. National Highway Traffic Safety Administration (NHTSA), falling asleep while driving is responsible for at least 100,000 automobile crashes annually, with an annual average of roughly 40,000 nonfatal injuries and 1,550 fatalities resulting from these crashes. These crashes typically happen between midnight and 6 a.m., involve a single vehicle and a sober driver traveling alone, and end with the car leaving the roadway without any attempt to avoid the crash. Automatically detecting the visual attention level of drivers early enough to warn them about their lack of adequate visual attention due to fatigue could save a significant number of lives and much personal suffering. It is therefore important to explore innovative technologies for solving the driver visual-attention monitoring problem. Many efforts have been reported in the literature on developing non-intrusive, real-time, image-based fatigue monitoring systems [2]. Measuring fatigue in the workplace is a complex process. Four kinds of measures are typically used: physiological, behavioral, subjective self-report, and performance measures [15]. An important physiological measure that has been studied to detect fatigue is eye movement. Several eye-movement features have been used to measure fatigue, such as blink rate, blink duration, long-closure rate, blink amplitude, saccade rate, and peak saccade velocity.
An increasingly popular method of detecting the presence of fatigue is a measure called Percent of Eyelid Closure (PERCLOS) [15]. This measure captures the percentage of time the eyelids are closed as a real-time indicator of fatigue, and it is the measure described in this paper.
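As a concrete illustration, PERCLOS can be computed as the fraction of frames within a sliding window in which the eyelids are closed. The sketch below is a minimal Python illustration, not the authors' implementation; the 30-frame window and the 0.2 openness threshold (corresponding to the commonly used 80%-closure criterion) are assumptions for illustration.

```python
def perclos(eye_openness, window, closed_threshold=0.2):
    """Return the fraction of 'closed' frames over the last `window`
    frames. eye_openness holds per-frame eye-opening ratios in [0, 1]
    (1.0 = fully open); a frame counts as closed when its ratio falls
    below closed_threshold, i.e., the eyelid covers >= 80% of the eye."""
    recent = eye_openness[-window:]
    closed = sum(1 for r in recent if r < closed_threshold)
    return closed / len(recent)

# Example: over a 30-frame window the eyes are closed in 9 frames,
# giving a PERCLOS of 0.3 (30% eyelid closure).
openness = [1.0] * 21 + [0.1] * 9
print(perclos(openness, window=30))  # 0.3
```

A drowsiness alarm would then fire when this value stays above a chosen threshold over successive windows.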


Fatigue monitoring starts with extracting the visual parameters that typically characterize a person's level of vigilance. This is accomplished via a computer vision system. In this section, we discuss the computer vision system we developed to achieve this goal. The figure provides an overview of our visual-cue extraction system for driver-fatigue monitoring. The system consists of two cameras: a wide-angle camera focusing on the face and a narrow-angle camera focusing on the eyes. The wide-angle camera monitors head movement and facial expression, while the narrow-angle camera monitors eyelid and gaze movements. The system starts with eye detection and tracking, which serve subsequent eyelid-movement monitoring, gaze determination, facial-orientation estimation, and facial-expression analysis. A robust, accurate, real-time eye tracker is therefore crucial. In this research, we propose real-time, robust methods for eye tracking under variable lighting conditions and facial orientations, based on combining appearance-based methods with an active infrared (IR) illumination approach. Combining the respective strengths of these complementary techniques and overcoming their shortcomings, the proposed method uses active IR illumination to brighten the subject's face and produce the bright-pupil effect. The bright-pupil effect and the appearance of the eyes (a statistical distribution based on eye patterns) are used simultaneously for eye detection and tracking. Recent techniques in pattern classification (the support vector machine) and object tracking (mean shift) are employed for appearance-based eye detection and tracking.
A. Image-Acquisition System
Image understanding of visual behaviors starts with image acquisition. The purpose of image acquisition is to acquire video images of the driver's face in real time. The use of an IR illuminator serves three purposes. First, it minimizes the impact of different ambient light conditions, ensuring image quality under varying real-world conditions including poor illumination, day, and night. Second, it allows us to produce the bright/dark-pupil effect, which constitutes the foundation for detection and tracking of the proposed visual cues. Third, since near-IR light is barely visible to the driver, it minimizes interference with driving. Specifically, our IR illuminator consists of two sets of IR light-emitting diodes (LEDs), distributed evenly and symmetrically along the circumferences of two coplanar concentric rings. The center of both rings coincides with the camera's optical axis. The IR LEDs emit noncoherent IR energy in the 800–900-nm region of the spectrum. The bright-pupil image is produced when the inner ring of IR LEDs is turned on, and the dark-pupil image when the outer ring is turned on, controlled via a video decoder. An example of the bright/dark pupils is given. Note that the glint, a small bright spot near the pupil produced by corneal reflection of the IR light, appears in both the dark- and bright-pupil images.
B. Eye Detection
Eye tracking starts with eye detection. Fig. 1 gives a flowchart of the eye-detection procedure. Eye detection is accomplished via pupil detection, owing to the use of active IR illumination. Specifically, to facilitate pupil detection, we developed circuitry to synchronize the inner and outer rings of LEDs with the even and odd fields of the interlaced image, respectively, so that they are turned on and off alternately. The interlaced input image is de-interlaced via a video decoder, producing the even and odd fields. While both images share the same background and external illumination, the pupils look significantly brighter in the even images than in the odd images. To eliminate the background and reduce external illumination, the odd image is subtracted from the even image, producing the difference image. The detection algorithm can then be applied to successive frames of a video sequence to track a single target. The search area can be restricted to around the last known position of the target, resulting in potentially large computational savings. This scheme introduces a feedback loop in which the result of one detection is used as input to the next detection process.
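The subtraction-and-threshold step can be sketched as follows. This is an illustrative numpy version, not the authors' code, and the threshold of 50 is an assumed value.

```python
import numpy as np

def pupil_candidates(even_field, odd_field, threshold=50):
    """Subtract the dark-pupil (odd) field from the bright-pupil (even)
    field; only the pupils, which are bright solely under inner-ring
    illumination, should survive thresholding. Returns the centroid of
    the candidate pixels, or None if nothing exceeds the threshold."""
    diff = even_field.astype(np.int16) - odd_field.astype(np.int16)
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

# Toy fields: identical 100x100 backgrounds, one bright pupil blob
even = np.full((100, 100), 80, dtype=np.uint8)
odd = even.copy()
even[40:45, 60:65] = 220              # bright pupil in the even field
print(pupil_candidates(even, odd))    # (62.0, 42.0)
print(pupil_candidates(odd, odd))     # None -- no bright pupil
```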
The detection algorithm can be described as follows:
1. Initialize the size and position of the search window.
2. Calculate the mass center of the window.
3. Adjust the center of the window to the mass center.
4. Repeat steps 2 and 3 until the distance between the two centers (the window center and the mass center) is less than a threshold.
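The steps above are the core of a mean-shift search and can be sketched in numpy as follows; the window size, convergence threshold, and iteration cap are illustrative assumptions.

```python
import numpy as np

def mean_shift_window(weights, cx, cy, half=10, eps=0.5, max_iter=20):
    """Move a (2*half+1)-pixel square window over the weight image
    toward the local mass center until the shift is below eps."""
    height, width = weights.shape
    for _ in range(max_iter):
        # current window bounds, clamped to the image (step 1)
        x0, x1 = max(int(cx - half), 0), min(int(cx + half) + 1, width)
        y0, y1 = max(int(cy - half), 0), min(int(cy + half) + 1, height)
        roi = weights[y0:y1, x0:x1]
        total = roi.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        # step 2: mass center of the window contents
        mx, my = (xs * roi).sum() / total, (ys * roi).sum() / total
        # step 4: stop once window center and mass center coincide
        if np.hypot(mx - cx, my - cy) < eps:
            break
        cx, cy = mx, my                # step 3: recenter the window
    return float(cx), float(cy)

# The window drifts from (30, 30) onto a weight blob centered at (40, 38)
wmap = np.zeros((80, 80))
wmap[36:41, 38:43] = 1.0
print(mean_shift_window(wmap, 30, 30))   # (40.0, 38.0)
```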
C. Eye-Tracking Algorithm
Based on the CAMSHIFT algorithm, the tracking process is as follows:
1. Use the detected eyes as the initial search window.
2. Convert the color space to YCrCb, calculate the histogram of the Y channel, and calculate the back projection of the histogram.
3. Run CAMSHIFT to obtain the new search window.
4. In the next video frame, use the updated window as the initial search-window size and position, and return to step 2.
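The back projection in step 2 can be sketched as follows. This is a simplified grayscale stand-in for the Y channel of YCrCb, not the authors' code, and the 16-bin histogram is an assumed value (in practice OpenCV's cv2.calcBackProject and cv2.CamShift would perform steps 2 and 3).

```python
import numpy as np

def back_projection(frame_y, eye_window, bins=16):
    """Build an intensity histogram of the initial eye window, then
    replace every pixel of the frame by its normalized bin value,
    so that eye-like intensities light up in the resulting map."""
    hist, _ = np.histogram(eye_window, bins=bins, range=(0, 256))
    hist = hist / hist.max()                    # normalize to [0, 1]
    idx = np.clip(frame_y // (256 // bins), 0, bins - 1)
    return hist[idx]                            # per-pixel likelihood

# Toy frame: a dark eye region (intensity 40) on a bright background
frame = np.full((60, 60), 200, dtype=np.uint8)
frame[20:30, 20:30] = 40
bp = back_projection(frame, frame[20:30, 20:30])
print(bp[25, 25], bp[5, 5])                     # 1.0 0.0
```

CAMSHIFT (step 3) then runs the mean-shift window update on this likelihood map, additionally adapting the window size and orientation to the tracked region.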
The detected eyes are then tracked from frame to frame. We have developed the following algorithm for eye tracking by combining a bright-pupil-based Kalman filter eye tracker with a mean-shift eye tracker. While Kalman filtering accounts for the dynamics of the moving eyes, mean shift tracks the eyes based on their appearance. We call this two-stage eye tracking. After locating the eyes in the initial frames, Kalman filtering is activated to track the bright pupils. The Kalman filter pupil tracker works reasonably well under frontal face orientation with open eyes. However, it fails if the pupils are not bright, due to oblique face orientations, eye closures, or external illumination interference. The Kalman filter also fails when sudden head movement occurs, because the assumption of smooth head motion is violated. We use mean-shift tracking to augment Kalman filtering and overcome this limitation: if Kalman filter tracking fails in a frame, eye tracking based on mean shift takes over. Mean-shift tracking is an appearance-based object-tracking method that tracks the eye regions according to the intensity statistical distributions of those regions and does not need bright pupils. It employs mean-shift analysis to identify the eye candidate region whose appearance is most similar to the given eye model in terms of intensity distribution. The mean-shift eye tracker can therefore track the eyes successfully under eye closure or oblique face orientations. It is also fast and handles noise well, but it lacks the capability for self-correction, so errors tend to accumulate and propagate to subsequent frames as tracking progresses; eventually, the tracker drifts away. We therefore combine Kalman filter tracking with mean-shift tracking to overcome their respective limitations and take advantage of their strengths.
Specifically, we take the following measures. First, two channels (eye images with dark and bright pupils) are used to characterize the statistical distributions of the eyes. Second, the eye model is continuously updated with the eyes detected by the last successful Kalman filter track, to avoid error propagation in the mean-shift tracker. Finally, experimental determination of the optimal window size and quantization level for mean-shift tracking further enhances the performance of our technique. The two trackers are activated alternately. The Kalman tracker is initiated first, assuming the presence of the bright pupils. When the bright pupils appear weak or disappear, the mean-shift tracker is activated to take over the tracking. Mean-shift tracking continues until the bright pupils reappear, when the Kalman tracker takes over again. Eye detection is reactivated if mean-shift tracking fails. These two-stage eye trackers work together and complement each other, significantly improving the robustness of the eye tracker. The eye-detection and -tracking algorithm was tested with different subjects under different facial orientations and illuminations. These experiments reveal that our algorithm is more robust than the conventional Kalman-filter-based bright-pupil tracker, especially for closed and partially occluded eyes due to face orientation. Even under strong external illumination, we achieved good results.
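The alternation between the two trackers can be sketched as follows. This is a minimal illustration with an assumed constant-velocity model and illustrative noise settings, not the authors' tuned filter; mean_shift_pos stands in for the output of the mean-shift tracker described above.

```python
import numpy as np

class PupilKalman:
    """Constant-velocity Kalman filter over pupil position (x, y)."""
    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])     # state: x, y, vx, vy
        self.P = np.eye(4)                      # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0       # x += vx, y += vy
        self.H = np.eye(2, 4)                   # we observe x, y only
        self.Q = np.eye(4) * 0.01               # process noise (assumed)
        self.R = np.eye(2)                      # measurement noise (assumed)

    def step(self, meas):
        # predict
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        if meas is not None:                    # correct when pupil seen
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.s = self.s + K @ (np.asarray(meas) - self.H @ self.s)
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.s[:2]

def track(kf, pupil_meas, mean_shift_pos):
    """Stage 1: Kalman track while the bright pupil is detected.
    Stage 2: fall back to the mean-shift position when it is not."""
    if pupil_meas is not None:
        return kf.step(pupil_meas)
    kf.step(None)                               # predict-only update
    return np.asarray(mean_shift_pos, dtype=float)

kf = PupilKalman(10.0, 10.0)
print(track(kf, (11.0, 10.5), None))            # bright pupil: Kalman
print(track(kf, None, (12.0, 11.0)))            # pupil lost: mean shift
```

In the full system, reappearance of the bright pupil would also trigger re-initialization of the eye model used by the mean-shift stage, as described above.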


The system currently runs on a Pentium IV PC (1.8 GHz) in real time (50 fields, 25 frames per second) at a resolution of 400×320 pixels. To test its performance, ten sequences simulating drowsiness behaviors were recorded, following the physiological rules explained in [4] for identifying drowsiness in drivers. The test sequences were recorded from a car on a motorway with different users, all without glasses, under different lighting conditions. These images were used as inputs to our algorithms, yielding robust, reliable, and accurate results.


Through the research presented in this paper, we developed a non-intrusive prototype computer vision system for real-time monitoring of a driver's vigilance. First, the necessary hardware and imaging algorithms were developed to simultaneously extract multiple visual cues that typically characterize a person's level of fatigue. Then, a probabilistic framework was built to model fatigue, which systematically combines the different visual cues and relevant contextual information to produce a robust and consistent fatigue index. Our testing did not take into account persons with large amounts of facial hair or persons wearing glasses; we plan to do so in the future. To make eye detection reliable over long periods, we will also add a tracking filter that exploits the tracking history. We would also like the system to recognize when only one eye is visible, as when the driver's head is turned 90 degrees to the left or right.


This work has been supported by my project guide and all the faculty members who directly or indirectly helped to make this project possible.


Figures at a glance

Figure 1 Figure 2 Figure 3


[1] M. R. Rosekind, E. L. Co, K. B. Gregory, and D. L. Miller, "Crew factors in flight operations XIII: A survey of fatigue factors in corporate/executive aviation operations," National Aeronautics and Space Administration, Ames Research Center, Moffett Field, CA, NASA/TM-2000-209610, 2000.
[2] A. Yilmaz et al., "Automatic feature detection and pose recovery for faces," ACCV 2002, Melbourne, Australia, 2002.
[3] H. Saito, T. Ishiwaka, M. Sakata, and S. Okabayashi, "Applications of driver's line of sight to automobiles - what can driver's eye tell," in Proc. Vehicle Navigation and Information Systems Conf., Yokohama, Japan, Aug. 1994, pp. 21–26.
[4] H. Ueno, M. Kaneda, and M. Tsukino, "Development of drowsiness detection system," in Proc. Vehicle Navigation and Information Systems Conf., Yokohama, Japan, Aug. 1994, pp. 15–20.
[5] S. Boverie, J. M. Leqellec, and A. Hirl, "Intelligent systems for video monitoring of vehicle cockpit," in Proc. Int. Congr. Expo. ITS: Advanced Controls and Vehicle Navigation Systems, 1998, pp. 1–5.
[6] M. K. et al., "Development of a drowsiness warning system," presented at the 11th Int. Conf. Enhanced Safety of Vehicles, Munich, Germany, 1994.
[7] R. Onken, "Daisy, an adaptive knowledge-based driver monitoring and warning system," in Proc. Vehicle Navigation and Information Systems Conf., Yokohama, Japan, Aug. 1994, pp. 3–10.
[8] J. Feraric, M. Kopf, and R. Onken, "Statistical versus neural net approach for driver behavior description and adaptive warning," Proc. 11th Eur. Annual Manual, pp. 429–436, 1992.
[9] T. Ishii, M. Hirose, and H. Iwata, "Automatic recognition of driver's facial expression by image analysis," J. Soc. Automotive Eng. Japan, vol. 41, no. 12, pp. 1398–1403, 1987.
[10] K. Yammamoto and S. Higuchi, "Development of a drowsiness warning system," J. Soc. Automotive Eng. Japan, vol. 46, no. 9, pp. 127–133, 1992.
[11] D. Dinges and M. Mallis, "Managing fatigue by drowsiness detection: Can technological promises be realized?" in Managing Fatigue in Transportation: Selected Papers from the 3rd Fatigue in Transportation Conference, Fremantle, Western Australia, L. R. Hartley, Ed. Oxford, U.K.: Elsevier, 1998, pp. 209–229.
[12] S. Charlton and P. Baas, "Fatigue and fitness for duty of New Zealand truck drivers," presented at the Road Safety Conf., Wellington, New Zealand, 1998.
[13] T. Akerstedt and S. Folkard, "The three-process model of alertness and its extension to performance, sleep latency and sleep length," Chronobiol. Int., vol. 14, no. 2, pp. 115–123, 1997.
[14] G. Belenky, T. Balkin, D. Redmond, H. Sing, M. Thomas, D. Thorne, and N. Wesensten, "Sustained performance during continuous operations: The US Army's sleep management system," in Managing Fatigue in Transportation: Selected Papers from the 3rd Fatigue in Transportation Conference, Fremantle, Western Australia, L. R. Hartley, Ed. Oxford, U.K.: Elsevier, 1998.
[15] D. Dawson, N. Lamond, K. Donkin, and K. Reid, "Quantitative similarity between the cognitive psychomotor performance decrement associated with sustained wakefulness and alcohol intoxication," in Managing Fatigue in Transportation: Selected Papers from the 3rd Fatigue in Transportation Conference, Fremantle, Western Australia, L. R. Hartley, Ed. Oxford, U.K.: Elsevier, 1998, pp. 231–256.
[16] Toshant Kumar and Chinmay Chandrakar, "Drivers Drowsiness Detection System (DDD)," CIIT Journal of Digital Image Processing, Nov. 2011.
[17] P. Sherry et al., Fatigue Countermeasures in the Railroad Industry: Past and Current Developments. Association of American Railroads, 2000.