ISSN (Online): 2278-8875, ISSN (Print): 2320-3765


Object Detection and Tracking in Dynamically Varying Environment

M. M. Sardeshmukh1, Dr. M. T. Kolte2, Dr. P. N. Chatur3
  1. Research Scholar, Dept. of E&TC, Government College of Engineering, Amravati, Maharashtra, India
  2. Professor and Head, Dept. of E&TC, MIT College of Engineering, Pune, Maharashtra, India
  3. Associate Professor and Head, Dept. of E&TC, Government College of Engineering, Amravati, Maharashtra, India

Abstract

Object detection and tracking is one of the important aspects of many video surveillance applications such as human activity recognition, patient monitoring, and traffic control. The task becomes more difficult and challenging under varying illumination, occlusion, outdoor scenes, and cluttered environments. We propose an algorithm that continuously updates the background and thereby improves object detection. The detected object is tracked through subsequent frames of the video and is finally classified. The videos used contain mainly three types of objects: a single person, a group of persons, and a vehicle. This type of classification is useful in surveillance systems at places such as shopping malls and parking lots.

Keywords

Cognitive object detection, object tracking, dynamically varying environment.

INTRODUCTION

Security of human lives and property has been a major concern of civilization for centuries. In modern society, the threats of theft, accidents, terrorist attacks, and riots are ever increasing. Because of the large amount of useful information that can be extracted from a video sequence, video surveillance has emerged as an effective tool to forestall these security problems. The automated security market is growing at a constant and high rate that is expected to be sustained for decades. Video surveillance is one of the fastest growing sectors in the security market because of its wide range of potential applications, such as intruder detection in shopping malls and important buildings, traffic surveillance in cities, detection of military targets, and recognition of violent or dangerous behaviors (e.g., in buildings and lifts). Visual surveillance in dynamic scenes, especially of humans and vehicles, is currently one of the most active research topics in computer vision. The most desirable qualities of a video surveillance system are (a) robust operation in real-world scenarios, characterized by sudden or gradual changes in the input statistics, and (b) intelligent analysis of video to assist the operators in scene analysis and event classification.

Overview of Automated Visual Surveillance System

The general framework of an automatic video surveillance system is shown in Figure 1. Video cameras are connected to a video processing unit that extracts high-level information associated with alert situations. This processing unit can be connected through a network to a control and visualization center that manages, for example, alerts. Another important component is a video database and retrieval tool in which selected video segments, video objects, and related content can be stored and queried. A good description of video object processing in a surveillance framework is presented in the literature. The main video processing stages are background modeling, object segmentation, object tracking, and behavior and activity analysis. In a multi-camera scenario, fusion of information is needed, which can take place at any level of processing. These cameras may also be of different modalities, such as thermal infrared, near infrared, and visible color cameras, so that multi-spectral video of the same scene can be captured and the redundant information can be used to improve the robustness of the system against dynamic changes in environmental conditions.

Prithvi Banarjee and Somnath Sengupta presented an automated video surveillance system that tracks a moving object and classifies it as a human or non-human entity, which helps in subsequent human activity analysis. Daw-Tung Lin and Kai-Yung Huang presented a framework of a collaborative multiple-camera tracking system for seamless object tracking across fixed cameras with overlapping and non-overlapping fields of view (FOV). Jianpeng Zhou and Jack Hoang of I3DVR International Inc. presented a real-time robust human detection and tracking system for video surveillance that can be used in varying environments. Their system consists of human detection, human tracking, and false object detection. Human detection uses background subtraction to segment blobs and a codebook to distinguish human beings from other objects, and an optimal design algorithm for the codebook is proposed. Tracking is performed at two levels: human classification and individual tracking.

SYSTEM MODEL AND ASSUMPTIONS

The automatic detection and tracking system has two major subsystems:
1) Detection of the human/object
2) Tracking of the detected object
From the literature, detection techniques are broadly classified into two major categories: 1) temporal differencing and 2) background subtraction.
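To make the distinction concrete, the following is a minimal Python/OpenCV sketch of the two categories. The threshold value is an illustrative assumption and is not taken from the paper.

```python
import cv2

DIFF_THRESHOLD = 25  # assumed gray-level change threshold, for illustration only

def temporal_difference(prev_gray, curr_gray, thresh=DIFF_THRESHOLD):
    """Temporal differencing: moving pixels are those that change
    between two consecutive frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def background_subtraction(background_gray, curr_gray, thresh=DIFF_THRESHOLD):
    """Background subtraction: foreground pixels are those that differ
    from a maintained background model."""
    diff = cv2.absdiff(curr_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```

Temporal differencing adapts quickly but leaves holes inside slowly moving objects, whereas background subtraction yields fuller silhouettes provided the background model is kept up to date.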

Background Modeling

The background is typically static and seldom changes dramatically. Nevertheless, background subtraction techniques suffer from many problems, especially when used outdoors. For instance, when sunshine is covered by clouds, the change in illumination is considerable. One of the successful solutions to these problems is to use a multi-color background model per pixel. Stauffer and Grimson introduced a method for modeling each background pixel by a mixture of M Gaussian distributions; however, the computational cost of this method is high. Therefore, this paper presents a new method inspired by the Gaussian mixture model. First, the stability S(x, y) of each pixel is determined: the stability of pixel (x, y) increases by one if the difference in pixel value between the current image and the previous two images is less than a threshold T_background. The background pixel value B(x, y) at location (x, y) is then computed as the mean value of all similar pixel values over this period. Next, the background information is updated under three conditions: (1) pure background, (2) illumination change, and (3) static object. A pure background test is carried out first, followed by an illumination change test and finally a static object test.
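As an illustration of the stability counter S(x, y) and the mean-based background value B(x, y), the following NumPy sketch implements the update rule described above. The threshold value and the reset behavior for unstable pixels are assumptions made for the example; the pure-background, illumination-change, and static-object tests are not reproduced here.

```python
import numpy as np

T_BACKGROUND = 15  # assumed gray-level threshold; the paper does not state a value

class StabilityBackground:
    """Per-pixel background model driven by a stability counter,
    a sketch of the update rule described above, not the authors' exact code."""

    def __init__(self, first_frame):
        f = first_frame.astype(np.float32)
        self.background = f.copy()                  # B(x, y)
        self.stability = np.zeros_like(f)           # S(x, y)
        self.running_sum = f.copy()                 # sum of "similar" pixel values
        self.prev1 = f.copy()
        self.prev2 = f.copy()

    def update(self, frame):
        f = frame.astype(np.float32)
        # A pixel is stable if it changed little with respect to the previous two frames.
        stable = (np.abs(f - self.prev1) < T_BACKGROUND) & \
                 (np.abs(f - self.prev2) < T_BACKGROUND)
        self.stability = np.where(stable, self.stability + 1, 0)
        self.running_sum = np.where(stable, self.running_sum + f, f)
        # B(x, y) is the mean of the similar pixel values over the stable period.
        self.background = np.where(stable,
                                   self.running_sum / (self.stability + 1),
                                   self.background)
        self.prev2, self.prev1 = self.prev1, f
        return self.background
```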

FLOWCHART AND ALGORITHM

(Flowchart of the proposed algorithm)

Algorithm

1. Start
2. Read a video input.
3. Extract the frames from the video.
4. Convert each frame to gray scale.
5. Consider the initial 6 frames as learning frames for threshold calculation.
6. Calculate the threshold values for the background.
7. Consider the next 10 frames for object detection.
8. Compare the pixels of each incoming frame with the threshold values.
9. If more than 40% of the pixels have changed, treat it as a temporary illumination change and consider the next frame; otherwise, extract the foreground object from the frame.
10. Deduce the boundary of the foreground object.
11. Calculate the height, width, and height-to-width ratio of the object.
12. Find the centroid, bottom-leftmost, and bottom-rightmost points.
13. Store the difference between the bottom-rightmost and bottom-leftmost points.
14. If the height-to-width ratio is less than 1, a vehicle is detected.
15. If the height-to-width ratio is greater than 1 and the difference between the leftmost and rightmost points shows a periodic nature, a pedestrian is detected.
16. Display the result.
17. Stop
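The steps above can be condensed into a compact Python/OpenCV sketch. This is an illustrative implementation under stated assumptions: the per-pixel threshold value, the running-average learning of the background, and the periodicity proxy used for the pedestrian test (the spread of the bottom points over recent frames) are choices made here for the example, not values or tests taken from the paper.

```python
import cv2
import numpy as np

LEARNING_FRAMES = 6       # step 5: frames used to learn the background
PIXEL_THRESHOLD = 25      # step 6: assumed per-pixel change threshold
ILLUMINATION_RATIO = 0.4  # step 9: >40% changed pixels => temporary illumination change

def classify(ratio, spread_history):
    """Steps 14-15: height-to-width ratio plus a crude periodicity proxy
    on the bottom leftmost/rightmost spread (assumed test)."""
    if ratio < 1.0:
        return "vehicle"
    spread = np.asarray(spread_history, dtype=np.float32)
    if len(spread) >= 8 and spread.std() > 1.0:
        return "pedestrian"
    return "unknown"

def run(video_path):
    cap = cv2.VideoCapture(video_path)                           # step 2
    background, spread_history, idx = None, [], 0
    while True:
        ok, frame = cap.read()                                   # step 3
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)  # step 4
        if idx < LEARNING_FRAMES:                                # steps 5-6
            background = gray if background is None else 0.5 * background + 0.5 * gray
            idx += 1
            continue
        changed = np.abs(gray - background) > PIXEL_THRESHOLD    # step 8
        if changed.mean() > ILLUMINATION_RATIO:                  # step 9: illumination change
            background = gray
            idx += 1
            continue
        mask = changed.astype(np.uint8) * 255                    # foreground object
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            c = max(contours, key=cv2.contourArea)               # step 10: object boundary
            x, y, w, h = cv2.boundingRect(c)
            ratio = h / float(w)                                 # step 11
            cx, cy = x + w // 2, y + h // 2                      # step 12: centroid
            pts = c.reshape(-1, 2)
            bottom = pts[pts[:, 1] >= y + h - 2]                 # points along the bottom edge
            spread_history.append(int(bottom[:, 0].max() - bottom[:, 0].min()))  # step 13
            print(idx, classify(ratio, spread_history), "centroid:", (cx, cy))   # step 16
        idx += 1
    cap.release()                                                # step 17
```

Calling run() on a video file prints, for each processed frame, the tentative class label and the centroid that would be handed to the tracker in subsequent frames.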

RESULT AND DISCUSSION

Experiments were carried out on several videos, especially outdoor scenes; sample results for the detection of a pedestrian, a group of pedestrians, and a vehicle are shown below. Background subtraction is used to extract the foreground object from the frame. To eliminate frequent illumination changes, a background updating technique is used. Object classification is performed using features such as the height-to-width ratio. The periodic nature of the movement of a pedestrian's legs is used to differentiate a pedestrian from a vehicle. The main advantage of the technique is its simplicity in both idea and algorithm. It is observed that the algorithm classifies a pedestrian, a group of pedestrians, and a vehicle in an outdoor video.
Fig. 1(b) shows the frames extracted from the video. The background is continuously updated and subtracted from the current image, which separates the foreground and background effectively. Morphological operations are used to clean the object image. Fig. 1(e) shows the white count in each column, which is used as one of the features in object classification. Edges are found for the detected object, as shown in Fig. 1(f). The centroid of the detected object is obtained, which is useful for tracking.
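The post-processing described above can be sketched as follows: morphological clean-up of the binary foreground, the per-column white count used as a feature (cf. Fig. 1(e)), and the centroid used for tracking. The kernel size is an assumption made for the example.

```python
import cv2

def clean_and_features(foreground_mask):
    """Clean a binary foreground mask and compute the per-column white count
    and the object centroid."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))          # assumed kernel size
    cleaned = cv2.morphologyEx(foreground_mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)           # fill small holes
    column_white_count = (cleaned > 0).sum(axis=0)                         # white pixels per column
    m = cv2.moments(cleaned, binaryImage=True)
    centroid = None
    if m["m00"] > 0:
        centroid = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
    return cleaned, column_white_count, centroid
```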
Finally, based on the extracted features, the object is classified as a single person, multiple persons, or a vehicle.

Figures at a glance

Figure 1
