ISSN: 2320-9801 (Online); 2320-9798 (Print)


Driver’s drowsiness detection using eye status to improve road safety

Mr. Susanta Podder1 Mrs. Sunita Roy2
  1. Plant OH&S Head – ACC Limited, West Bengal, India
  2. Ph.D. scholar in the Dept. of Computer Science & Engineering, University of Calcutta, Kolkata, India

Abstract

In recent years, many technologies have been used to detect driver drowsiness in the field of accident avoidance systems [1]-[3]. Such a system requires hardware components, such as a camera installed inside the car that captures images of the driver at a fixed interval, and an alarm system that alerts the driver once his or her level of drowsiness is detected. Besides these hardware components, a software component is needed to detect the driver's level of drowsiness, and this software component is the main concern of our paper. It is believed that a driver's fatigue can easily be detected by monitoring the eye status [5], [7], which is either 'open' or 'closed'. In this paper, we develop a drowsiness detection system that accurately monitors the open or closed state of the driver's eyes in real time.

Keywords

Driver’s drowsiness, eye tracking, face detection, fatigue, accident avoidance system

INTRODUCTION

Driver fatigue is an important factor in vehicle accidents. A recent statistical report shows that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes [1]. Hence, the development of technologies to detect driver drowsiness is a major challenge in the field of accident avoidance systems and is an active research area. In the literature, four types of techniques are used for detecting driver drowsiness.

A. SENSING OF PHYSIOLOGICAL CHARACTERISTICS

Among these four methods, the most accurate is based on human physiological phenomena [1]. This technique is implemented in two ways: measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking; and measuring physical changes, such as sagging posture, leaning of the driver's head, and the open/closed states of the eyes [1]. Although this technique is the most accurate, it is not practical, since sensing electrodes would have to be attached directly to the driver's body, which would be annoying and distracting to the driver. In addition, long periods of driving would result in perspiration on the sensors, diminishing their ability to monitor accurately.

B. SENSING OF DRIVER OPERATION

The second technique is well suited to real-world driving conditions, since it can be made non-intrusive by using optical sensors or video cameras to detect changes.
In this type of drowsiness detection system, we first capture an image of the driver's face using a camera located inside the car. We then segment the face region and exclude the background. A very simple approach, such as a color-based model, may be used to localize the face region. After the face has been localized, the eye region is extracted. We choose the eye region as our decision parameter because it is very dynamic in nature and a person's drowsiness can only be determined by looking at the eyes [7]. If the eyes are open, the situation is normal; if the eyes are closed, an alarm signal should be generated to alert the driver.

C. SENSING OF VEHICLE RESPONSE

Driver operation and vehicle behavior can be monitored through steering wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement [3]. These are also non-intrusive ways of detecting drowsiness, but they are limited by vehicle type and driver conditions.

D. MONITORING THE RESPONSE OF DRIVER

The final technique for detecting drowsiness is to monitor the response of the driver [6]. This involves periodically requesting the driver to send a response to the system to indicate alertness. The problem with this technique is that it eventually becomes tiresome and annoying to the driver.

VARIOUS STAGES INVOLVED IN DROWSINESS DETECTION SYSTEM

In the following subsections we describe the various stages involved in the drowsiness detection system.

A. IMAGE ACQUISITION

Using a web camera installed inside the car, we acquire images of the driver. Although the camera generates a video stream, our algorithm must be applied to every frame of that stream. In this paper we only discuss the processing performed on a single frame.
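A minimal sketch of this acquisition loop, assuming OpenCV and a camera index of 0, is shown below; every decoded frame is simply handed on for the single-frame processing described in the remaining stages.

    import cv2

    def acquire_frames(camera_index=0):
        """Yield frames from the in-car web camera one at a time."""
        cap = cv2.VideoCapture(camera_index)   # open the camera installed inside the car
        try:
            while True:
                ok, frame = cap.read()         # grab the next frame of the video stream
                if not ok:
                    break
                yield frame                    # each frame is processed independently
        finally:
            cap.release()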

B. DETECT THE FACE REGION

In this stage we detect the region containing the driver's face. A variety of techniques are available, but we use a very simple one based on a color space model. Three basic color spaces are commonly used: RGB, HSV and YCbCr [11]. Any one of them can be used, but HSV and YCbCr give the best results. In the HSV color model, after transforming an image into this color space, a pixel whose H (hue) and S (saturation) components satisfy the following condition is treated as a skin color pixel:
0 <= H <= 0.25 and 0.15 <= S <= 0.9    (1)
Similarly, if we transform an image into the YCbCr color model, a skin color pixel's Y, Cb and Cr components should satisfy:
135 < Y < 145, 100 < Cb < 110, 140 < Cr < 150    (2)
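As a minimal sketch of the HSV test in equation (1), assuming the frame arrives from OpenCV in BGR order (the function name and the rescaling of the normalized thresholds to OpenCV's 0-180 hue and 0-255 saturation ranges are our own illustration), the skin mask can be computed as follows:

    import cv2
    import numpy as np

    def skin_mask_hsv(bgr_frame):
        """Binary mask of pixels whose H and S components satisfy equation (1)."""
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
        # OpenCV stores H in [0, 180) and S in [0, 255], so the normalized
        # thresholds of equation (1) are rescaled to those ranges.
        lower = np.array([0, int(0.15 * 255), 0], dtype=np.uint8)
        upper = np.array([int(0.25 * 180), int(0.90 * 255), 255], dtype=np.uint8)
        return cv2.inRange(hsv, lower, upper)   # 255 = skin-colored pixel, 0 = background

The largest connected region of the resulting mask can then be taken as the face region; the YCbCr test of equation (2) can be implemented in the same way with cv2.COLOR_BGR2YCrCb (note that OpenCV orders the channels as Y, Cr, Cb).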

C. DETECT THE EYE REGION

After detecting the face region, we detect the eye region, because in our approach the eyes are the decision parameter used to determine the driver's drowsiness [4], [5], [7], [8]. The eye region can be detected by various methods. In our approach, we use the concept of intensity change, because the eyes are the darkest part of the face. For this we calculate the average intensity for each y-coordinate. This is called the horizontal average, since the averages are taken across the horizontal pixel values.
The valleys (dips) in the plot of the horizontal averages indicate intensity changes. When the horizontal averages were first plotted, many small valleys were found that do not represent real intensity changes but result from small differences in the averages. To correct this, a smoothing algorithm was applied; it eliminates these small changes and produces a smoother, cleaner graph.
After obtaining the horizontal average data, the next step is to find the most significant valleys, which indicate the eye area. Assuming that the person has a uniform forehead (i.e., little hair covering the forehead), this is based on the notion that, moving down from the top of the face, the first intensity change is the eyebrow and the next change is the upper edge of the eye.
Valleys are found where the slope changes from negative to positive, and peaks where the slope changes from positive to negative. The size of a valley is determined by the distance between the peak and the valley. Once all the valleys are found, they are sorted by size, as in the sketch below.
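The following sketch shows one way to implement these steps (the helper names and the 5-row smoothing window are illustrative choices, not values from the original system): the per-row averages are smoothed with a small moving-average window and then scanned for sign changes of the slope.

    import numpy as np

    def horizontal_averages(gray_region, window=5):
        """Average intensity of every row (y-coordinate), smoothed with a moving average."""
        averages = gray_region.mean(axis=1)                  # one value per y-coordinate
        kernel = np.ones(window) / window
        return np.convolve(averages, kernel, mode='same')    # suppresses the tiny false valleys

    def find_valleys(smoothed):
        """Return (y, size) pairs sorted by size, where size is the peak-to-valley drop."""
        slope = np.diff(smoothed)
        valleys, last_peak = [], smoothed[0]
        for y in range(1, len(slope)):
            if slope[y - 1] > 0 and slope[y] <= 0:           # peak: slope goes + to -
                last_peak = smoothed[y]
            if slope[y - 1] < 0 and slope[y] >= 0:           # valley: slope goes - to +
                valleys.append((y, last_peak - smoothed[y]))
        return sorted(valleys, key=lambda v: v[1], reverse=True)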

D. DETECTION OF VERTICAL EYE POSITION

The largest valley with the lowest y-coordinate is the eyebrow, and the second largest valley with the next lowest y-coordinate is the eye. This is shown in Figures 4a and 4b.
This process is carried out for the left and right sides of the face separately, and the eye areas found on the two sides are then compared to check whether the eyes have been located correctly. Calculating the left side means taking the averages from the left edge to the center of the face, and similarly for the right side. The two sides are processed separately because the horizontal averages are not accurate when the driver's head is tilted. For example, if the head is tilted to the right, the horizontal average of the eyebrow area will include the left eyebrow and possibly the right-hand side of the forehead.
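A sketch of this left/right split, reusing the helpers above (the dictionary layout is just one convenient way to report the result):

    def locate_eye_rows(gray_face):
        """Estimate eyebrow and eye rows independently for each half of the face."""
        height, width = gray_face.shape
        center = width // 2
        positions = {}
        for side, half in (('left', gray_face[:, :center]),
                           ('right', gray_face[:, center:])):
            valleys = find_valleys(horizontal_averages(half))
            if len(valleys) < 2:
                continue                                     # eyes not found on this side
            top_two = sorted(valleys[:2], key=lambda v: v[0])
            positions[side] = {'eyebrow_y': top_two[0][0],   # topmost large valley: eyebrow
                               'eye_y': top_two[1][0]}       # next one down: the eye
        return positions    # the two sides can then be compared for consistency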

E. EXTRACT THE EYE REGION

In this stage, we extract the eye region from the whole face, because the eyes must be examined to determine whether the person feels drowsy.
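As a small illustrative sketch (the 15-row margin is an arbitrary value, not taken from the original system), a strip around the detected eye row can be cropped for the next stage:

    def extract_eye_strip(gray_face, eye_y, margin=15):
        """Crop a horizontal strip of rows around the detected eye y-coordinate."""
        top = max(0, eye_y - margin)
        bottom = min(gray_face.shape[0], eye_y + margin)
        return gray_face[top:bottom, :]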

F. DETERMINING THE STATE OF THE EYES

The state of the eyes (open or closed) is determined by the distance between the first two intensity changes found in the previous step. When the eyes are closed, the distance between the y-coordinates of the intensity changes is larger than when the eyes are open. This is shown in Figure 5.
A limitation of this approach arises if the driver moves his or her face closer to or further from the camera. In that case the distances vary, since the number of pixels the face occupies changes. Because of this limitation, the system assumes that the driver's face stays at roughly the same distance from the camera at all times.
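A sketch of this test is given below; the threshold separating "open" from "closed" distances is a hypothetical tuning parameter that would have to be calibrated for the assumed fixed camera distance.

    def eye_is_closed(eye_strip, closed_threshold=8):
        """Classify the eye as closed when the first two intensity changes lie far apart."""
        valleys = find_valleys(horizontal_averages(eye_strip, window=3))
        if len(valleys) < 2:
            return False                                  # not enough structure to decide
        first, second = sorted(v[0] for v in valleys[:2])
        return (second - first) > closed_threshold        # larger distance => eyes closed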

G. DROWSINESS DETECTION

When 5 consecutive frames find the eyes closed, the alarm is activated and the driver is alerted to wake up. A consecutive number of closed frames is required so that eye closure due to ordinary blinking is not counted. The criterion for judging the alertness level on the basis of the eye closure count is based on the results of a previous study [1].
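The frame-counting logic can be sketched as follows, where frame_shows_closed_eyes stands in for the whole per-frame pipeline described above and trigger_alarm is a hypothetical alarm hook:

    CLOSED_FRAME_LIMIT = 5   # consecutive closed-eye frames before the driver is alerted

    def monitor(frames, frame_shows_closed_eyes, trigger_alarm):
        """Count consecutive closed-eye frames; isolated blinks reset the counter."""
        closed_count = 0
        for frame in frames:
            if frame_shows_closed_eyes(frame):
                closed_count += 1
                if closed_count >= CLOSED_FRAME_LIMIT:
                    trigger_alarm()
                    closed_count = 0
            else:
                closed_count = 0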

EXPERIMENTAL RESULTS

CONCLUSION

In this paper we have discussed a drowsiness detection process that examines the state of the eyes (open or closed) and, based on that status, determines the driver's drowsiness. The movement of the mouth or head can also indicate drowsiness: a drowsy or sleepy person opens the mouth frequently, and a drowsy person tends to bend his or her head towards the floor of the car. Further research can therefore consider these cues as well.
 


References