
Analysis of Obstacle Detection Technologies used in Mobile Robots

Kausalya. P, S. Poonkuntran
  1. Post-graduate Scholar, Computer Science and Engineering, Velammal College of Engineering and Technology, Viraganoor, Madurai, Tamil Nadu, India
  2. Professor, Computer Science and Engineering, Velammal College of Engineering and Technology, Viraganoor, Madurai, Tamil Nadu, India

Abstract

The purpose of this paper is to present a survey of general robotic obstacle detection systems together with a performance analysis of each. The approaches surveyed permit the detection of unknown obstacles simultaneously with the steering of the mobile robot, avoiding collisions while advancing towards the target. Highway obstacle detection is a challenging problem: highways present an unknown and dynamic environment with real-time constraints, and the high speeds of travel force a system to detect objects at long ranges. While a variety of competing methods have been proposed for on-road obstacle detection, most of the work has focused on detecting large objects, especially other vehicles. This paper describes the various techniques used in obstacle detection for mobile robots.

Keywords

Mobile robots, obstacle detection, dynamic environment

INTRODUCTION

Robotics is a diverse area of study with applications in numerous fields and aspects of society. Properly designed robotic systems that take into account how they benefit human users make use of multiple methods of evaluation. For the purpose of this paper, robotics is defined as a mechanical system controlled by embedded or other computer systems with the purpose of simplifying human tasks. Real-time obstacle avoidance is one of the key issues for successful applications of mobile robot systems. A general and commonly employed method for obstacle avoidance is based on edge detection; a disadvantage of this approach is that the robot must stop in front of an obstacle in order to allow a more accurate measurement. All mobile robots feature some kind of collision avoidance, ranging from primitive algorithms that detect an obstacle and stop the robot short of it, to sophisticated algorithms that enable the robot to detour around obstacles. The latter algorithms are much more complex, since they involve not only the detection of an obstacle but also quantitative measurements of the obstacle's dimensions. Once these have been determined, the obstacle avoidance algorithm must steer the robot around the obstacle and resume motion toward the original target. This paper describes several obstacle detection techniques for mobile robots and presents the critical findings and inferences for each.

II. OBSTACLE DETECTION TECHNOLOGIES

I. AN OBSTACLE DETECTION METHOD BY FUSION OF RADAR AND MOTION STEREO
 Technique: Motion Stereo Technique
 Components: Millimeter-wave radar, Single video camera, vision sensor
In order to avoid collision with an object that blocks the course of a vehicle, it is necessary to measure the distance to the object and to detect the positions of its side boundaries. In this paper, an object detection method achieved by the fusion of millimeter-wave radar and a single video camera is proposed. This is among the least expensive solutions, because at least one camera is already necessary for lane-marking detection. In the method, the distance is measured by the radar, and the boundaries are found from an image sequence using a motion stereo technique aided by the distance measured by the radar. Since the method does not depend on the appearance of objects, it is capable of detecting not only automobiles but also other objects; in the paper, both stationary and moving objects were detected, and a pedestrian as well as a vehicle was detected. Detection is performed based on the motion of the object. First, objects are detected with the radar. Feature points whose motion is easy to track are selected and tracked frame by frame in an image sequence. Then, regions in the image whose estimated distances match the distances measured by the radar are detected, the distance estimation being derived from the motion in the image by the motion stereo technique. The method thus belongs to the family of object detection methods that exploit motion in an image sequence captured by a forward-viewing camera.
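
To illustrate the fusion step, the following minimal Python sketch (our illustration, not code from the paper; the forward-motion geometry, feature coordinates, and matching tolerance are assumed) estimates per-feature depth from the image motion of tracked points under a known forward camera advance, keeps the features whose motion-stereo depth agrees with the radar range, and reads off the object's side boundaries from their lateral spread:

```python
import numpy as np

def motion_stereo_depth(x_prev, x_curr, advance_m):
    """Current depth of a tracked point from its image motion under a
    forward camera advance d: with x = f*X/Z (x measured from the image
    center), Z_curr = d * x_prev / (x_curr - x_prev)."""
    dx = x_curr - x_prev
    if abs(dx) < 1e-6:           # no measurable motion: depth unobservable
        return float("inf")
    return advance_m * x_prev / dx

def object_boundaries(x_prev, x_curr, advance_m, radar_range_m, tol_m=1.0):
    """Keep features whose motion-stereo depth matches the radar range;
    their lateral spread in the image marks the object's side boundaries."""
    depths = np.array([motion_stereo_depth(a, b, advance_m)
                       for a, b in zip(x_prev, x_curr)])
    on_object = np.abs(depths - radar_range_m) < tol_m
    if not on_object.any():
        return None
    xs = np.asarray(x_curr)[on_object]
    return xs.min(), xs.max()    # left/right boundary (image coordinates)
```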

A. Critical Findings

Radar can measure an accurate distance to an object and is robust against bad weather, for example rain and fog; however, it does not have enough lateral resolution to find the object's boundaries. Even when an object is detected by radar, a camera is still necessary to detect lane markings and establish whether the detected object lies on the course of the vehicle. Stereo vision is much less accurate than radar, and it needs expensive lenses and exact, rigid alignment of the two cameras.
In the case of stereo vision, to improve the accuracy of the system it is necessary either to widen the base length, i.e., the distance between the two cameras, or to increase the resolution of the images. But to make the alignment of the two cameras rigid, they must be integrated into the same unit; therefore, widening the base length is difficult in practice. To increase the resolution of the images, the resolutions of both the CCD and the optics ought to be increased, which raises the cost even more. A two-camera system is therefore expensive because of its two costly sensors. The fused radar-camera method demonstrated the feasibility of detecting objects up to 50 m, and the boundaries of the objects were detected to within 0.2 m accuracy.
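
The baseline/resolution trade-off follows directly from the stereo range equation Z = fB/d: a fixed disparity error maps to a depth error that grows quadratically with range and shrinks linearly with baseline and focal length. The sketch below illustrates this standard first-order relation with assumed rig parameters; it is not taken from the paper.

```python
def stereo_depth_error_m(range_m, focal_px, baseline_m, disparity_err_px=1.0):
    """First-order depth uncertainty of a stereo rig: from Z = f*B/d,
    a disparity error dd gives dZ ~= Z**2 * dd / (f * B)."""
    return range_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Hypothetical rig at 50 m: doubling the baseline (or the pixel
# resolution, which halves the disparity error) halves the depth error.
for baseline_m in (0.2, 0.4):
    print(baseline_m, stereo_depth_error_m(50.0, 800.0, baseline_m))
```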

B. Inference

In order for a driver assistance system alone to avoid collision with an object, it must detect lane markings to estimate the course of the vehicle and detect the area that the object occupies on the road. In this paper, a method for detecting the occupying area of an object, achieved by the fusion of millimeter-wave radar and a single video camera, is presented.
To find the exact occupying area of an object, an accurate measurement of the distance from the vehicle to the object and the detection of its side boundaries are necessary. The distance can be measured accurately by the radar; the work therefore focused on detecting the boundaries of objects, which are estimated from an image sequence using a motion stereo technique aided by the distance measured by the radar. Since the method does not depend on the appearance of objects, it is capable of detecting not only automobiles but also other objects. Furthermore, owing to the distance information, moving objects as well as stationary objects are detectable. The paper demonstrated the feasibility of detecting objects up to 50 m, which was considered a necessary range on urban roads.

II. MULTIPLE-SENSOR COLLISION AVOIDANCE SYSTEM FOR AUTOMOTIVE APPLICATIONS USING AN IMM APPROACH FOR OBSTACLE TRACKING

 Technique: Tracking, collision avoidance, data fusion
 Components: Millimeter Wave (MMW) radar, Far Infrared (FIR) camera, Interacting Multiple Model (IMM) filter
A multi-sensor collision avoidance system is presented for automotive applications. For obstacle detection and tracking, a Millimeter Wave (MMW) radar and a Far Infrared (FIR) camera are chosen in order to provide object lists to the respective sensor trackers. The algorithm for track management, data association, filtering and prediction for both sensors is also presented, with a focus on Kalman filtering. An Interacting Multiple Model (IMM) filter is designed to cope with all the possible modes in which a car may move. Finally, a distributed fusion architecture, using a central track file for the objects' tracks, is adopted and described analytically. The results of the work will be used, among others, for the purposes of the European co-funded project "EUCLIDE".
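
To make the IMM idea concrete, here is a minimal one-dimensional, two-mode sketch of the IMM cycle (mixing, mode-matched Kalman filtering, mode-probability update, combination). The constant-velocity models, noise levels, and transition matrix are illustrative assumptions, not the EUCLIDE design:

```python
import numpy as np

DT = 0.1  # sample time (assumed)

class ModeFilter:
    """Kalman filter for one IMM mode; here the modes differ only in
    process noise (low q = steady driving, high q = maneuvering)."""
    def __init__(self, q):
        self.F = np.array([[1.0, DT], [0.0, 1.0]])   # constant velocity
        self.H = np.array([[1.0, 0.0]])              # position is measured
        self.Q = q * np.array([[DT**3 / 3, DT**2 / 2],
                               [DT**2 / 2, DT]])
        self.R = np.array([[1.0]])                   # measurement noise

    def step(self, x, P, z):
        x = self.F @ x                               # predict
        P = self.F @ P @ self.F.T + self.Q
        S = self.H @ P @ self.H.T + self.R           # innovation covariance
        K = P @ self.H.T @ np.linalg.inv(S)
        r = np.atleast_1d(z - self.H @ x)            # innovation
        x = x + K @ r                                # update
        P = (np.eye(2) - K @ self.H) @ P
        lik = float(np.exp(-0.5 * r @ np.linalg.inv(S) @ r)
                    / np.sqrt(2 * np.pi * np.linalg.det(S)))
        return x, P, lik

def imm_step(filters, xs, Ps, mu, TPM, z):
    """One IMM cycle: mix the mode states, run each mode filter,
    update the mode probabilities, and combine the estimates."""
    M = len(filters)
    c = TPM.T @ mu                                   # predicted mode probs
    xs_new, Ps_new, liks = [], [], np.empty(M)
    for j in range(M):
        w = TPM[:, j] * mu / c[j]                    # mixing weights
        x0 = sum(w[i] * xs[i] for i in range(M))
        P0 = sum(w[i] * (Ps[i] + np.outer(xs[i] - x0, xs[i] - x0))
                 for i in range(M))
        xj, Pj, liks[j] = filters[j].step(x0, P0, z)
        xs_new.append(xj)
        Ps_new.append(Pj)
    mu = liks * c
    mu /= mu.sum()                                   # updated mode probs
    x = sum(mu[j] * xs_new[j] for j in range(M))     # combined estimate
    return xs_new, Ps_new, mu, x
```

A tracker would instantiate, for example, filters = [ModeFilter(0.1), ModeFilter(5.0)] with a sticky transition matrix such as TPM = np.array([[0.95, 0.05], [0.05, 0.95]]) and call imm_step once per radar or FIR measurement.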

A. Critical Findings:

 The application of distributed fusion is not as robust as centralized fusion
 Traffic congestion in urban areas renders the application of the system impractical
B. Inference
 The application of distributed fusion is not as robust as centralized fusion.
 The use of an IMM estimator ensures good tracking quality in the case of normal driving.

III. IMAGE-BASED DETECTION AND OBSTACLE AVOIDANCE FOR MOBILE ROBOTS

This method uses two visible laser points as a measuring ruler for automatically adjusted measurement. It is a breakthrough compared with using two CCD cameras or with measuring methods that require setting up references at the measuring points. Using a single image, non-contact distance measurement can be realized. The measuring principle is to produce two bright projection spots on the measured surface with two parallel laser projectors. The pixel distance between the spots then changes in accordance with the shooting distance. As long as the pixel distance between the two bright spots in the image frame can be identified, the distance measurement can be obtained, enabling detection and obstacle avoidance for mobile robots using a single camera image.
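
A rough sketch of this principle follows (our illustration, not the authors' code; the laser spacing, focal length, and brightness threshold are assumed values). Because the physical spacing of the two spots is fixed by the parallel lasers, their image separation shrinks in proportion to range:

```python
import numpy as np

LASER_SEPARATION_M = 0.10   # physical spacing of the parallel lasers (assumed)
FOCAL_LENGTH_PX = 800.0     # camera focal length in pixels (assumed)

def spot_pixel_gap(gray_frame):
    """Locate the two laser spots on the central scan line only (the
    parallel arrangement guarantees they lie on that row), assuming
    the spots are the brightest pixels there."""
    row = gray_frame[gray_frame.shape[0] // 2]
    bright = np.where(row > 0.9 * row.max())[0]
    return bright.max() - bright.min()

def distance_m(gap_px):
    """Pinhole projection: gap = f * D / Z, hence Z = f * D / gap."""
    return FOCAL_LENGTH_PX * LASER_SEPARATION_M / gap_px
```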

A. Critical Findings

The detection range is limited by the emissive power of the emission unit and by the material of the reflecting surface.

B. Inference

This paper aimed to improve the weak points of the earlier triangulation and parallel methods. Regarding the measuring parameters, operational errors are considered and ways of improvement are proposed. The creative design of the measuring construction is the parallel arrangement of Laser A, Laser B and the central point of the camera's lens, so that the two laser bright spots always appear on the image's central horizontal scanning line. Because the bright spots must lie on this central horizontal scanning line, it is not necessary to perform pattern recognition on the entire picture, and the measuring speed is therefore increased. One of the goals of this research is to propose a distance measuring method that requires no adjustment in the vertical direction.

IV. A GENERIC OBSTACLE DETECTION METHOD FOR COLLISION AVOIDANCE

 Technique: Stereo camera
 Components: Monocular-based method, hybrid stereo vision-based method, digital image processing
Obstacle detection is an important component in driver assistance, as it helps systems to locate obstacles and thereby prevent collisions. The aim of this study is to develop an obstacle detection module based on digital image processing. The paper presents a hybrid stereo vision-based method that combines stereo matching and homographic transformation. A sparse matching method is used in order to obtain a rapid geometric representation of the road scene, which allows the upper and lower parts of obstacles to be extracted. According to the position of the lower part, the method uses either dense stereo matching or the homographic transformation to extract candidate obstacle regions. A verification test is then performed to check whether a retained region is an obstacle or not. In order to avoid collisions, the distance to the preceding obstacle is computed so as to keep the vehicle carrying the camera at a safe distance. The method was tested on the DIPLODOC road stereo sequence captured on a highway.
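
A minimal sketch of the homographic half of such a hybrid detector is shown below (an OpenCV-based illustration assuming a pre-calibrated road-plane homography H_ground; it is not the paper's implementation). Pixels on the road plane align after warping one view onto the other, so large residuals flag candidate obstacle regions:

```python
import cv2

def obstacle_mask(left_gray, right_gray, H_ground, diff_thresh=30):
    """Warp the left image onto the right with the road-plane homography;
    road pixels align, so large intensity residuals mark obstacle
    candidates rising above the ground plane."""
    h, w = right_gray.shape[:2]
    warped = cv2.warpPerspective(left_gray, H_ground, (w, h))
    diff = cv2.absdiff(warped, right_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```
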
A. Critical Findings
 Stereo vision-based methods present a large number of false matches and an imprecise localization of far obstacles
 Complex processing due to the large amount of data being processed
B. Inference
This work presented a hybrid obstacle detection method based on a stereo vision approach. The method combines the advantages of stereo matching and homographic transformation. The results obtained under normal environmental conditions, for both flat and non-flat roads, are promising. Future work includes evaluating the method on other stereoscopic sequences presenting different environmental conditions, and investigating the tracking of the detected obstacles.

V. VISION BASED TARGET TRACKING AND COLLISION AVOIDANCE FOR MOBILE ROBOTS

 Technique: Stereoscopic vision system, motion estimation techniques
 Components: Laser range finder, cameras
A real-time object tracking and collision avoidance method is presented for mobile robot navigation in indoor environments using stereo vision and a laser sensor. Stereo vision is used to identify the target and to calculate its distance from the mobile robot, while laser-based range measurements are used to avoid collision with surrounding objects. The target is tracked by its predetermined or dynamically defined color. The mobile robot's velocity is dynamically adjusted according to its distance from the target. Experimental results in indoor environments demonstrate the effectiveness of the method. One difference of the proposed method compared to existing approaches is that the latter try to control the tracker's speed and steering angle to follow the target while holding the vision system fixed.
In the presented method, each camera of the stereo vision system tracks the target using its own pan/tilt mechanism. Thus, the target continues to be tracked even when the robot is in collision avoidance mode, or when the target is moving over irregular terrain. The velocity of the robot is adjusted according to the relative distance between the robot and the target, calculated using data derived from the stereo vision system.
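
The colour-based tracking and distance-scaled velocity described above can be sketched as follows (an OpenCV-based illustration; the HSV thresholds, speeds, and distances are placeholder assumptions, not the paper's parameters):

```python
import cv2

def track_target(frame_bgr, hsv_lo, hsv_hi):
    """Segment the target by its colour and return the blob centroid,
    in the spirit of the paper's colour-based tracker."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                       # target colour not found
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def velocity_for_range(dist_m, stop_at=0.8, full_speed_at=3.0, v_max=0.5):
    """Scale the robot's speed with its distance to the target
    (hypothetical constants): stop when close, cap when far."""
    a = (dist_m - stop_at) / (full_speed_at - stop_at)
    return v_max * min(max(a, 0.0), 1.0)
```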

A. Critical Findings

 Variance in the particular color components is significant.
 The cost of this implementation is high.
B. Inference
This paper presented a vision based target tracking method for mobile robots using a stereoscopic vision system and a laser range finder. Experimental results demonstrate the effectiveness and robustness of the approach. Significant advantages over other vision based target tracking techniques are the ability to track a variety of objects and the fact that the robot's motion is derived from the horizontal angle of the cameras, which allows the robot to avoid obstacles while continuing to track a target, including a target moving over irregular terrain.
The presented approach is initialized using motion estimation techniques to specify the target and compute its color histograms. In the presented results, the target is the only object moving in the image scene; however, if more than one object is moving, the algorithm can be trained on the one that occupies the greater portion of the image. Alternative methods can be used to avoid the randomness of motion estimation.
Methods such as template matching can specify with accuracy the object that needs to be tracked. This is only for initialization purposes: once the object of interest is located in the image, its color histograms are computed and the rest of the algorithm proceeds as before. Future work involves vision based target tracking in outdoor environments, where the variance in the color components is significant.
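
For that initialization step, a one-shot template-matching pass such as the sketch below (a hypothetical helper using the standard OpenCV API, not the paper's code) can localize the object of interest before its colour histograms are computed:

```python
import cv2

def locate_target(frame_gray, template_gray):
    """Return the best-match bounding box of the template in the frame,
    used once to initialise colour-histogram tracking."""
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)
    h, w = template_gray.shape[:2]
    return top_left, (top_left[0] + w, top_left[1] + h), score
```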

VI. MAP-BASED NAVIGATION FOR A MOBILE ROBOT WITH OMNIDIRECTIONAL IMAGE SENSOR COPIS

 Technique: Visual navigation
 Components: Conic Projection Image Sensor, conic mirror
This work designed a new omnidirectional image sensor, COPIS (Conic Projection Image Sensor), to guide the navigation of a mobile robot. The feature of COPIS is passive sensing of an omnidirectional image of the environment in real time (at the frame rate of a TV camera) using a conic mirror. COPIS is a suitable sensor for visual navigation in a real-world environment. The method navigates the robot by detecting the azimuth of each object in the omnidirectional image and matching the azimuths against a given environmental map.
The robot can precisely estimate its own location and motion (velocity), because COPIS observes a 360° view around the robot, even when not all edges are extracted correctly from the omnidirectional image. The robot can avoid colliding with unknown obstacles and can estimate locations by detecting azimuth changes while moving about in the environment. Under the assumption of known robot motion, an environmental map of an indoor scene is generated by monitoring azimuth changes in the image. An acoustic sensor can easily acquire a depth map of the environment around the robot.
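
The azimuth-based location estimation can be illustrated by intersecting two bearing rays observed from known robot positions, a minimal sketch of the triangulation COPIS performs while moving (coordinate and angle conventions are our assumptions):

```python
import numpy as np

def triangulate_from_azimuths(p1, a1, p2, a2):
    """Intersect two azimuth rays, measured from robot positions p1 and
    p2 in the world frame, to locate a vertical edge. Nearly parallel
    rays (an object straight ahead during pure forward motion) make the
    system ill-conditioned, matching the large frontal error reported
    in the paper."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(a1), np.sin(a1)])
    d2 = np.array([np.cos(a2), np.sin(a2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1
```
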
A. Critical Findings
 Estimating the robot's location requires a long computation time.
 Measurement errors are caused by the swaying motion of the robot.
 While the mobile robot moves, not all edges are extracted correctly from the omnidirectional image.

B. Inference

This paper described map generation and map-based navigation algorithms for the omnidirectional image sensor COPIS. A method for generating an environmental map was presented: assuming the robot moves at a known velocity, COPIS can estimate the locations of vertical edges and the free space of the mobile robot from the geometrical relation between vertical edges in the image. This free space is the space where the robot can move freely, so there is no problem as long as the robot moves within it. When the robot moves toward a goal position that lies outside this free space, however, it cannot pass through the passage, because the passage is estimated as a surface. This problem is common to edge-based algorithms that yield a wire-frame description of the scene; to solve it, COPIS was integrated with an acoustic sensor. It was also difficult to distinguish the projection data of individual edges in areas with dense vertical edges; this problem can be solved by improving the azimuth resolution and by using edge signs. The latter part of the paper described a method for estimating the motion of the robot and the locations of unknown obstacles. The robot's motion was estimated by matching the azimuth information of vertical edges between the input image and the given environmental map; the motion could be estimated successfully even when only part of the given environmental map could be detected in the omnidirectional image.
The locations of unknown obstacles were estimated by monitoring the loci of the azimuth angles of the vertical edges. Since triangulation was used for the location measurement of edges, a large error occurred in the region directly in front of the robot when it moved straight forward, but the error diminishes when the robot changes direction. Since COPIS observes a 360° view around the robot in real time, and the precision of the obtained locations of the robot and of unknown obstacles is sufficient, COPIS is a suitable sensor for navigation. However, the resolution of the image taken by COPIS is not sufficient to capture the detailed shapes of objects, because a 360° view is projected onto a single image. Binocular vision using an ordinary lens with a limited field of view is useful for understanding such details; therefore, a sensor system integrating COPIS with binocular vision would be efficient for both navigation and manipulation. For example, the global view taken by COPIS can be used for navigation and for finding candidate objects of interest, while local views taken by binocular vision are used to analyze the detailed 3-D structure of only the interesting objects. Future work includes integrating COPIS with binocular vision.

VII. CONCLUSIONS

Almost all navigation robots demand some sort of obstacle detection; hence an obstacle avoidance strategy is of the utmost importance. Obstacle-avoiding robots have a vast field of application. They can be used as service robots for household work and many other indoor applications. They are equally important in scientific exploration and emergency rescue: some places are dangerous or even impossible for humans to reach directly, and robots can help us there. In those challenging environments, robots need to gather information about their surroundings to avoid obstacles. Nowadays, even in ordinary environments, people expect robots to detect and avoid obstacles. In this paper, we have surveyed the various obstacle detection technologies used in mobile robots.

ACKNOWLEDGMENT

We acknowledge the Department of Computer Science and Engineering of Velammal College of Engineering and Technology for providing the infrastructure required to support this work.

References

  1. Amditis, A., et al., "Multiple sensor collision avoidance system for automotive applications using an IMM approach for obstacle tracking," Proceedings of the Fifth International Conference on Information Fusion, Vol. 2, IEEE, 2002.
  2. Ben Romdhane, N., Hammami, M., and Ben-Abdallah, H., "A generic obstacle detection method for collision avoidance," IEEE Intelligent Vehicles Symposium (IV), IEEE, 2011.
  3. Osugi, K., Miyauchi, K., Furui, N., and Miyakoshi, H., "Development of the scanning laser radar for ACC system," JSAE Review, Vol. 20, pp. 579-1554, 1999.
  4. Tokoro, S., "Automotive application systems of a millimeter-wave radar," Proc. IEEE Intelligent Vehicles Symposium, pp. 260-265, 1996.
  5. Tsalatsanis, A., Valavanis, K., and Yalcin, A., "Vision based target tracking and collision avoidance for mobile robots," Journal of Intelligent and Robotic Systems, Vol. 48, No. 2, pp. 285-304, 2007.
  6. Yagi, Y., Nishizawa, Y., and Yachida, M., "Map-based navigation for a mobile robot with omnidirectional image sensor COPIS," IEEE Transactions on Robotics and Automation, Vol. 11, No. 5, pp. 634-648, 1995.