ISSN ONLINE(2278-8875) PRINT (2320-3765)


Real Time Driver Drowsiness Detection System

A.N.Shewale1, Pranita Chaudhari2
  1. Professor, Dept. of ECE, SGDCOE, Jalgaon, Maharashtra, India
  2. PG Student, Dept. of ECE, SGDCOE, Jalgaon, Maharashtra, India

Visit for more related articles at International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering

Abstract

Nowadays, one of the main causes of road accidents is driver drowsiness. In this paper, we focus on designing a system that monitors the open or closed state of the driver's eyes in real time. A video camera is placed on the car dashboard in front of the driver to monitor the state of the eyes, and the system in turn detects driver drowsiness. The system uses the Viola-Jones method, which detects objects in images, i.e., it detects the face, and eye localization is done by Haar-like features. If the eyes remain closed for successive frames, the system gives the indication "drowsy driver".

Keywords

Face detection, eye detection, Haar-like features.

INTRODUCTION

Currently, transport systems are an essential part of human activities. Anyone can fall victim to drowsiness while driving, whether after a night of too little sleep, in an altered physical condition, or during a long journey. The sensation of sleepiness reduces the driver's level of vigilance, producing dangerous situations and increasing the probability of accidents. Driver drowsiness and fatigue are among the important causes of road accidents, adding every year to the number of deaths and injuries globally. In this context, it is important to use new technologies to design and build systems that are able to monitor drivers and to measure their level of attention during the entire process of driving.
This paper proposes a new driver drowsiness detection system. The focus is placed on designing a system that accurately monitors the open or closed state of the driver's eyes in real time. By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident. Detection of fatigue involves processing a sequence of images of the face and observing eye movements.

RELATED WORK

Drowsiness detection can generally be done by various techniques based on different characteristics: sensing of physiological characteristics, sensing of driver operation, sensing of vehicle response, and monitoring the response of the driver. The authors in [1] discuss that, of these, using physiological characteristics is the best technique. This technique is implemented by measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking, and by measuring physical changes such as sagging posture, leaning of the driver's head, and the open/closed state of the eyes. The first approach is the most accurate, but it is intrusive since sensing electrodes would have to be attached directly to the driver's body. The second approach is well suited to real-world driving conditions since it can be non-intrusive, using the optical sensors of video cameras to detect changes [1].

DEVELOPMENT OF SYSTEM

The images of the driver are captured from a camera installed in front of the driver on the car dashboard. The overall flowchart of the drowsiness detection system is shown in Fig. 1.
[Fig. 1: Overall flowchart of the drowsiness detection system]

FACE DETECTION

Face detection is done by the Viola-Jones method, developed by Paul Viola and Michael Jones, which focuses on detecting objects in images. Object detection uses simple rectangular features called Haar-like features, an integral image for rapid feature evaluation, the AdaBoost machine-learning method, and a cascaded classifier to combine many features efficiently [2]. These components are described as follows:

A. HAAR-LIKE FEATURES:

A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region, and calculates the difference between these sums. This difference is then used to categorize subsections of an image. The key advantage of a Haar-like feature over most other features is its calculation speed: due to the use of integral images, a Haar-like feature of any size can be calculated in constant time (approximately 60 microprocessor instructions for a two-rectangle feature).
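As an illustration, the value of a simple two-rectangle feature can be sketched in a few lines of Python (the toy window values and NumPy-based summation here are purely illustrative, not the paper's implementation):

```python
import numpy as np

def two_rect_feature(window):
    """Value of a two-rectangle Haar-like feature: the sum of pixel
    intensities in the left half minus the sum in the right half."""
    h, w = window.shape
    left = window[:, : w // 2].sum()
    right = window[:, w // 2 :].sum()
    return int(left) - int(right)

# A toy 4x4 window: bright left half, dark right half (a vertical edge).
window = np.array([
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [200, 200, 10, 10],
], dtype=np.uint8)

print(two_rect_feature(window))  # → 1520, a strong edge response
```

A large absolute value indicates a strong intensity difference between the two regions, which is what makes such features useful for detecting edge-like facial structures.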

B. INTEGRAL IMAGE

The simple rectangular features of an image are calculated using an intermediate representation called the integral image. The integral image is an array in which each entry Ai[x,y] holds the sum of the intensity values of all pixels located above and to the left of location (x,y), inclusive, where A[x,y] is the original image and Ai[x,y] is the integral image [3]. With this representation, the sum over any rectangle can be obtained with only four array references.
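A minimal sketch of the integral image, and of the resulting constant-time rectangle sum, using NumPy cumulative sums (illustrative; rows and columns are indexed [y, x] here):

```python
import numpy as np

def integral_image(A):
    """Ai[y, x] = sum of A over all pixels above and to the left of (x, y),
    inclusive, built with two cumulative sums."""
    return A.cumsum(axis=0).cumsum(axis=1)

def rect_sum(Ai, y0, x0, y1, x1):
    """Sum of the original image over the inclusive rectangle
    (y0, x0)..(y1, x1), using only four lookups into the integral image."""
    total = Ai[y1, x1]
    if y0 > 0:
        total -= Ai[y0 - 1, x1]          # strip above the rectangle
    if x0 > 0:
        total -= Ai[y1, x0 - 1]          # strip left of the rectangle
    if y0 > 0 and x0 > 0:
        total += Ai[y0 - 1, x0 - 1]      # corner subtracted twice, add back
    return total

A = np.arange(16, dtype=np.int64).reshape(4, 4)
Ai = integral_image(A)
print(rect_sum(Ai, 1, 1, 2, 2))  # → 30, i.e. 5 + 6 + 9 + 10
```

Because `rect_sum` touches at most four entries regardless of rectangle size, every Haar-like feature costs the same small, constant amount of work.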

C. AdaBoost

AdaBoost, short for "Adaptive Boosting", is a machine-learning method introduced by Yoav Freund and Robert Schapire. It can be used with many other types of learning algorithms to improve their performance. AdaBoost takes a training set of positive and negative images and builds a set of weak classifiers from Haar-like features. It trains the weak classifiers in rounds, increasing the weights of training samples that were misclassified so that later classifiers concentrate on them; in the final combination, more accurate weak classifiers receive larger weights and less accurate ones smaller weights. This weighted combination gives a strong classifier.
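The reweighting scheme can be illustrated with a minimal 1-D AdaBoost using decision stumps as weak classifiers. This is a toy sketch only: the actual detector boosts stumps over Haar-feature responses, and the data here are made up.

```python
import numpy as np

def stump_predict(x, thresh, polarity):
    """Weak classifier: a decision stump on a single feature value."""
    return np.where(polarity * x < polarity * thresh, 1, -1)

def adaboost_train(x, y, n_rounds=8):
    """Minimal AdaBoost on 1-D data with labels in {+1, -1}."""
    n = len(x)
    w = np.full(n, 1.0 / n)                 # start with uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for t in np.unique(x):              # exhaustively pick the best stump
            for pol in (+1, -1):
                pred = stump_predict(x, t, pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # accurate stumps weigh more
        w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, t, pol))
    return ensemble

def adaboost_predict(ensemble, x):
    score = sum(a * stump_predict(x, t, pol) for a, t, pol in ensemble)
    return np.sign(score)

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1, 1, 1])   # not separable by a single stump
model = adaboost_train(x, y)
print((adaboost_predict(model, x) == y).mean())
```

No single stump can classify this label pattern, but the weighted combination recovers most of it, which is the point of boosting weak learners.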

D. CASCADED CLASSIFIER

The cascade classifier consists of a number of stages, where each stage is a collection of weak learners, simple classifiers known as decision stumps. Boosting is used to train each stage, yielding a highly accurate stage classifier from a weighted average of the decisions made by the weak learners.
Each stage of the classifier labels the region defined by the current location of the sliding window as either positive or negative: positive indicates an object was found, negative that no object was found. If the label is negative, the classification of this region is complete and the detector shifts the window to the next location. If the label is positive, the classifier passes the region to the next stage. The detector reports an object at the current window location only when the final stage classifies the region as positive. In this way unlikely regions are eliminated quickly, with no further processing, which speeds up the overall algorithm.
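The early-rejection behaviour can be sketched as follows. The stage tests and region fields here are hypothetical stand-ins for the real boosted stage classifiers; the point is only that a negative at any stage stops further, costlier work.

```python
def cascade_classify(region, stages):
    """Run a region through cascade stages; reject at the first negative.
    `stages` is a list of functions returning True (pass) or False."""
    for i, stage in enumerate(stages):
        if not stage(region):
            return False, i              # rejected early: later stages never run
    return True, len(stages)             # all stages positive: object found

# Hypothetical stages of increasing cost: cheapest checks run first.
stages = [
    lambda r: r["mean_intensity"] > 50,  # cheap brightness test
    lambda r: r["edge_score"] > 0.3,     # edge-density test
    lambda r: r["symmetry"] > 0.8,       # most expensive test, run last
]

dark_region = {"mean_intensity": 20, "edge_score": 0.9, "symmetry": 0.9}
print(cascade_classify(dark_region, stages))  # → (False, 0)
```

Since the vast majority of sliding-window positions contain no face, most regions are discarded after only the first, cheapest stage, which is what makes exhaustive window scanning affordable in real time.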

EYE DETECTION

Images or real-time video are captured from the camera installed in front of the driver's face. The video is converted into a number of frames, and the OpenCV face Haar classifier is loaded. Each frame is compared with the pre-defined features of the Haar classifier; when the features match, the face is detected and a square is drawn around it. From the detected face, the position of the eyes is estimated, and by comparing with the OpenCV eye Haar classifier, the eyes are detected and squares are drawn around the left and right eyes.

IMPLEMENTATION

We have used the Haar training applications in OpenCV to detect the face and eyes; these create a classifier given a set of positive and negative samples. OpenCV is an open-source computer vision library designed for computational efficiency with a strong focus on real-time applications, and it helps build vision applications quickly and easily. OpenCV satisfies the low-processing-power and high-speed requirements of our application. We also use Emgu CV, a cross-platform .NET wrapper for the OpenCV image processing library [4], which allows OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, and IronPython. To detect human eyes, the face has to be detected first; this is done using the OpenCV face Haar-cascade classifier. Once the face is detected, the location of the eyes is estimated and eye detection is done using the eye Haar-cascade classifier. Hence, using OpenCV, the face and eyes are detected accurately and displayed on the monitor as shown in Fig. 4(a): the larger square indicates the face while the smaller squares indicate the eyes. When the eyes are closed, the face and eyes are still detected and the system shows the indication "Driver Drowsy", as shown in Fig. 4(b) and Fig. 4(c).

EXPERIMENTAL RESULTS

We have used OpenCV as the platform to develop the code for eye detection in real time. The code is then run on a system with OpenCV installed. To detect human eyes, the face has to be detected first; this is done by the OpenCV face Haar-cascade classifier.
Figure 4(a) shows the face and eyes detected initially, with an indication on the right side of whether a face was detected. The red square indicates the face while the small blue squares indicate the eyes.
[Fig. 4: (a) Face and eyes detected, with the detection status indicated on the right]
Once the face and eyes are detected, the system checks the status of the eyes, i.e., their open or closed state. If both eyes remain closed for successive frames, it indicates that the driver is drowsy, and the system gives the warning signal "driver drowsy", as shown in Fig. 4(b).
Fig.4: (b) When both eyes are closed, the driver is alerted by the warning message "Driver Drowsy"
If one of the eyes is detected as closed for successive frames, the system likewise gives the drowsiness warning, as shown in Fig. 4(c).
Fig.4: (c) When one eye is closed, the driver is alerted by the warning message "Driver Drowsy"
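The successive-frame rule above can be sketched as a simple per-frame counter. The frame threshold is an assumed value for illustration; the paper does not state one.

```python
DROWSY_FRAMES = 15   # assumed: consecutive closed-eye frames before warning

class DrowsinessMonitor:
    """Flag drowsiness once eyes stay closed for a run of consecutive frames."""

    def __init__(self, threshold=DROWSY_FRAMES):
        self.threshold = threshold
        self.closed_count = 0

    def update(self, eyes_open):
        """Feed one frame's eye state; return True when the driver is drowsy."""
        if eyes_open:
            self.closed_count = 0        # any open-eye frame resets the counter
        else:
            self.closed_count += 1
        return self.closed_count >= self.threshold

monitor = DrowsinessMonitor(threshold=3)
states = [True, False, False, True, False, False, False]
print([monitor.update(s) for s in states])
# → [False, False, False, False, False, False, True]
```

Requiring a run of consecutive closed-eye frames, rather than reacting to a single frame, is what distinguishes drowsiness from an ordinary blink.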

CONCLUSIONS

The system proposed in this paper provides accurate detection of driver fatigue. The application can be deployed in real time to reduce the rate of traffic accidents due to drowsy drivers, and it can also help drivers stay awake while driving by warning them when they become sleepy. The system can be improved in several dimensions; in particular, future work should include other parameters of the driver, such as yawning, to obtain a better estimate of the driver's vigilance.

References

  1. W. W. Wierwille, "Overview of Research on Driver Drowsiness Definition and Driver Drowsiness Detection," 14th International Technical Conference on Enhanced Safety of Vehicles, 1994.
  2. P. Viola and M. J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
  3. P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, 2001.
  4. Emgu CV Wiki, http://www.emgu.com/wiki/index.php/Main_Page, retrieved January 20, 2012.