ISSN: 2229371X
Suchi Upadhyay ^{1}, S.K.Singh ^{2} ,Manoj Gupta ^{3} ,Ashok K. Nagawat^{4}

Abstract
This paper deals with calibrating a camera to find the intrinsic and extrinsic camera parameters, which are necessary to recover the depth of an object in a stereovision system.
Keywords 
Camera Calibration, Tsai’s Algorithm, Stereovision, Linear Calibration, Non-Linear Calibration, Depth Estimation 
INTRODUCTION 
A 3D projection is a mathematical transformation used to project three-dimensional points onto a two-dimensional plane, often to simulate the relationship of the camera to the subject. 3D projection is often the first step in representing three-dimensional shapes two-dimensionally in computer graphics. Perspective projection is a type of rendering that graphically approximates, on a planar (2D) surface, the images of 3D objects so as to approximate actual visual perception. 
Camera calibration is a necessary step in 3D computer vision in order to extract metric information from 2D images. Much work has been done, starting in the photogrammetry community and more recently in computer vision. Zhengyou Zhang [1] gives an idea in “Camera Calibration with One-Dimensional Objects”. According to the dimension of the calibration objects, the calibration techniques can be classified into three categories: 
Self-calibration: no calibration object is used, and only image point correspondences are required. 
2D plane based calibration 
3D reference object based calibration 
Camera calibration is the process of determining the internal and external parameters of the camera so that the location of the objects observed by the camera can be determined [2]. If accurate camera calibration methods are used, the problem of recovering depth information from stereo image pairs is significantly simplified. 
Camera calibration involves two sets of parameters: intrinsic and extrinsic. The intrinsic parameters define the internal geometric and optical characteristics of the camera, whereas the extrinsic parameters define the position and orientation of the camera within an arbitrarily defined 3D coordinate system [3]. 
2D Case of Camera Calibration 
A 2D plane based calibration technique requires observing a planar pattern (Figure 2). Unlike Tsai’s technique, knowledge of the plane’s motion is not necessary, and almost anyone can make such a calibration pattern. 
Assumption 
Pixel coordinates: 
The measurement in pixel coordinates is taken from the 2D projected image plane. 
(a) Unit in pixels 
(b) Origin: top left corner 
(c) x values increase from left to right 
(d) y values increase from top to bottom 
World coordinates: 
The measurement in world coordinates is taken from the arbitrary world reference frame. 
User defined 
In order to measure the real size of objects there must be a mapping from each pixel coordinate to a world coordinate. 
Camera parameters introduced 
Camera parameters are divided into interior and exterior parameters, as follows: 
INTERIOR PARAMETERS: 
Geometry of CCD 
dx : center-to-center distance of pixels in x direction 
dy : center-to-center distance of pixels in y direction 
Principal point 
xp : x-coordinate of the principal point, relative to the center of the image 
yp : y-coordinate of the principal point, relative to the center of the image 
Camera constant 
f : focal length 
Lens distortion coefficients 
k1 : first order lens distortion coefficient 
k2 : second order lens distortion coefficient 
k3 : third order lens distortion coefficient 
Frame buffer property 
sx : scaling factor 
EXTERIOR PARAMETERS: 
Rigid body transform 
Rx: rotation around x-axis 
Ry: rotation around y-axis 
Rz: rotation around z-axis 
Tx: translation in x direction 
Ty: translation in y direction 
Tz: translation in z direction 
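The six exterior parameters define a rigid-body transform from world to camera coordinates. As an illustration (the paper's own implementation is in MATLAB; this NumPy sketch and its function names are ours), the rotation is composed from the three axis rotations and applied together with the translation:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Compose a rotation from angles (radians) about the x, y and z axes,
    R = Rot(rx) @ Rot(ry) @ Rot(rz)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def world_to_camera(p_world, rx, ry, rz, tx, ty, tz):
    """Apply the rigid-body transform: p_cam = R @ p_world + T."""
    R = rotation_matrix(rx, ry, rz)
    T = np.array([tx, ty, tz])
    return R @ np.asarray(p_world, dtype=float) + T
```

With all angles zero the transform reduces to a pure translation, which makes the behaviour easy to check by hand.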
3D Case of Camera Calibration 
3D Camera calibration is performed by observing a calibration object whose geometry in 3D space is known with very good precision. 
The calibration object usually consists of two or three planes orthogonal to each other (Figure 5). Sometimes, a plane undergoing a precisely known translation is also used, which equivalently provides 3D reference points. 
Tsai’s Perspective Projection Camera Model 
Tsai uses a pinhole camera model to describe the transformation of points in 3D space to pixels in the camera’s frame buffer. Tsai’s camera model consists of 11 parameters: six extrinsic, “exterior-orientation” parameters (Rx, Ry, Rz, Tx, Ty, Tz) and five intrinsic, “interior-orientation” parameters (f, Cx, Cy, sx, k1). For a fixed lens, all 11 camera parameters are constants estimated from calibration data taken from a single camera view (i.e. the exterior and interior orientation of the camera is fixed). 
Tsai’s camera model contains the following parameters: 
R : a 3x3 rotation matrix 
T : a translation vector 
f : the focal length of the camera 
sx : an uncertainty factor introduced by the image capture hardware 
k1 : the radial lens distortion coefficient 
(Cx, Cy) : the image centre 
Of these, R, T, f, sx and k1 are determined using Tsai’s calibration algorithm; Cx and Cy can be determined beforehand and need not be recalibrated. 
In Tsai's model, illustrated in Fig. 1, the origin of the camera-centered coordinate system (xc, yc, zc) coincides with the front nodal point of the camera; the zc axis coincides with the camera's optical axis. The image plane is assumed to be parallel to the (xc, yc) plane and at a distance f from the origin, where f is the pinhole camera's effective focal length. The relationship between the position of a point P in world coordinates (xw, yw, zw) and the point's image in the camera's frame buffer (Xf, Yf) is defined by a sequence of coordinate transformations. The first transformation is a rigid body rotation and translation from the world-coordinate system (xw, yw, zw) to the camera-centered coordinate system (xc, yc, zc). This is described by 
(xc, yc, zc)^T = R (xw, yw, zw)^T + T (1) 
where T = (Tx, Ty, Tz)^T is the translation vector and 
R = | r1 r2 r3 | 
    | r4 r5 r6 | (2) 
    | r7 r8 r9 | 
is the 3x3 rotation matrix describing the orientation of the camera in the world-coordinate system. R can also be expressed as 
R = Rot (Rx) Rot (Ry) Rot (Rz) (3) 
the product of three rotations around the x, y, and z axes of the world-coordinate system. 
The second transformation is a perspective projection (using an ideal pinhole-camera model) of the point in camera coordinates to the position of its image in undistorted sensor-plane coordinates, (Xu, Yu). This transformation is described by 
Xu = f xc / zc,  Yu = f yc / zc (4) 
The third transformation, illustrated in Figure 6, is from the undistorted (ideal) position of the point's image in the sensor plane to the true position of the point's image, (Xd, Yd), which results from geometric lens distortion. This is described by 
Xu = Xd (1 + k1 r^2),  Yu = Yd (1 + k1 r^2),  r = sqrt(Xd^2 + Yd^2) (5) 
where k1 is the coefficient of radial lens distortion. 
The final transformation is between the true position of the point's image on the sensor plane and its coordinates in the camera's frame buffer, (Xf, Yf). This is described by 
Xf = sx Xd / dx + Cx,  Yf = Yd / dy + Cy (6) 
where Cx and Cy are the coordinates (in pixels) of the intersection of the zc axis and the camera's sensor plane; dx and dy are the effective center-to-center distances between the camera's sensor elements in the xc and yc directions; and sx is a scaling factor to compensate for any uncertainty in the ratio between the number of sensor elements on the CCD and the number of pixels in the camera's frame buffer in the x direction [7]. 
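Taken together, the four transformations map a world point to frame-buffer pixels. The sketch below is a Python/NumPy illustration (the paper's own implementation is in MATLAB, and the function name tsai_project is ours). Since the radial distortion relation gives (Xu, Yu) in terms of (Xd, Yd), the sketch inverts it by fixed-point iteration, which is an assumption adequate only for small k1:

```python
import numpy as np

def tsai_project(p_world, R, T, f, k1, sx, dx, dy, Cx, Cy, iters=20):
    """Project a 3D world point to frame-buffer pixels via Tsai's four steps."""
    # 1. Rigid-body transform: world -> camera coordinates.
    xc, yc, zc = R @ np.asarray(p_world, dtype=float) + T
    # 2. Pinhole perspective projection onto the undistorted sensor plane.
    Xu, Yu = f * xc / zc, f * yc / zc
    # 3. Radial distortion: Xu = Xd (1 + k1 r^2) is implicit in (Xd, Yd);
    #    a few fixed-point iterations suffice for small k1 (our assumption).
    Xd, Yd = Xu, Yu
    for _ in range(iters):
        r2 = Xd * Xd + Yd * Yd
        Xd, Yd = Xu / (1 + k1 * r2), Yu / (1 + k1 * r2)
    # 4. Sensor plane -> frame-buffer pixel coordinates.
    Xf = sx * Xd / dx + Cx
    Yf = Yd / dy + Cy
    return Xf, Yf
```

With k1 = 0 the model reduces to a plain pinhole projection, which is easy to verify by hand; a positive k1 pulls image points toward the image centre.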
TSAI’S ALGORITHM 
The algorithm given by Tsai is a two-stage process designed to be performed without operator assistance. It calibrates the R, T, f, k1 and sx parameters of the above camera model (Figure 3). The algorithm executes quickly on PC hardware due to the absence of large nonlinear searches. 
A calibration pattern is required by this algorithm, and Tsai provides different versions for coplanar and non-coplanar calibration patterns. It is a single-view algorithm; however, it can be adapted to use multiple views of the calibration pattern. The first stage of the process determines sx, the rotation R, and the first two components of the translation vector, Tx and Ty. The focal length f and the z component of the translation vector, Tz, are also estimated at this stage. This is achieved by solving a system of linear equations whose input is the coordinates of points in the calibration pattern, both in the image and in the real world. The various parameters are then recovered from the solution to this system. The second stage of the process involves a steepest-descent search. This is used to determine the radial distortion factor k1, which cannot be recovered by the linear stage; f and Tz are also adjusted during the search [3]. 
SYSTEM IMPLEMENTATION 
Calibrate a projective camera using a linear least-squares approach, without taking radial distortion into account. 
Given a MATLAB data file that contains the 3D coordinates of some points in the scene along with their 2D projections in the image, we write a MATLAB function called LinearCalib that computes the projective camera parameters. The signature of the function is as follows: function [CamCalib] = LinearCalib(Points3D, Points2D). Inputs: Points3D = a 4xN matrix of N 3D points in homogeneous coordinates; Points2D = a 2xN matrix of N 2D points. 
Outputs: CamCalib = A 3x4 projective camera matrix 
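The specified computation can be sketched as a direct linear transform (DLT): each 3D–2D point pair contributes two linear equations in the 12 entries of the camera matrix, and the least-squares solution under ||m|| = 1 is the right singular vector with the smallest singular value. The paper calls for MATLAB; the NumPy version below is an illustrative sketch (the name linear_calib merely mirrors the required LinearCalib):

```python
import numpy as np

def linear_calib(points3d, points2d):
    """Linear least-squares (DLT) estimate of the 3x4 projection matrix.

    points3d : 4xN homogeneous world coordinates
    points2d : 2xN pixel coordinates
    """
    n = points3d.shape[1]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        X = points3d[:, i]
        x, y = points2d[:, i]
        # x (m3 . X) = m1 . X  and  y (m3 . X) = m2 . X
        A[2 * i, 0:4] = X
        A[2 * i, 8:12] = -x * X
        A[2 * i + 1, 4:8] = X
        A[2 * i + 1, 8:12] = -y * X
    # The minimizer of ||A m|| with ||m|| = 1 is the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

The result is defined only up to scale, as any projective camera matrix is; normalize (for example by the (3,4) entry) before comparing to another matrix.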
H/W and S/W Requirements 
Basic H/W Requirements 
CPU: Intel Pentium IV, 2.4 GHz or faster 
RAM: 256 MB or greater for best performance 
Hard Disk: 40 GB with at least 10 GB of free space 
Webcam: Logitech QuickCam Pro 3000 
Digital Camera: Canon PowerShot A420, 4.0 megapixels with Magnification Ratio 11x. 
Basic S/W Requirements 
Operating System: Windows XP with Service Pack 2 
Development Tool: MATLAB 7.1 
DirectX: Release 9 or later 
Flow Diagram of the Project 
This work consists of two parts: linear and non-linear camera calibration techniques. For the linear technique, we used a single camera to take a view of the grid-pattern object; for the non-linear technique, two cameras (or the same camera used twice) take separate views of the same object. The non-linear technique gives us the depth information of an object in a stereovision system. The flow of the work can therefore be considered in the following two parts: 
Flow Diagram for Camera Calibration 
Step1: Two planes are placed at a right angle with checkerboard patterns. 
Step2: We know the positions of the selected points with respect to the world coordinate system of the target. 
Step3: We position the camera in front of the target and find the image coordinates in pixels. 
Step4: We obtain functions in MATLAB that calculate the projection matrix M. 
Step5: Now we find the camera intrinsic and extrinsic parameters with respect to the target in the world reference frame. 
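Step 5 above amounts to factoring the estimated projection matrix. A common way to do this, sketched below in NumPy as an illustration (the paper works in MATLAB, and the helper names rq and decompose_projection are ours), is an RQ decomposition of the left 3x3 block of M into an upper-triangular intrinsic matrix and a rotation, assuming a distortion-free linear camera model:

```python
import numpy as np

def rq(A):
    """RQ decomposition of a square matrix, built from NumPy's QR."""
    Q, R = np.linalg.qr(np.flipud(A).T)
    R = np.flipud(R.T)[:, ::-1]
    Q = Q.T[::-1, :]
    # Flip signs so the triangular factor has a positive diagonal.
    D = np.diag(np.sign(np.diag(R)))
    return R @ D, D @ Q

def decompose_projection(M):
    """Split M = K [R | t] (up to scale) into intrinsics K, rotation R,
    and translation t."""
    K, R = rq(M[:, :3])
    s = K[2, 2]          # overall scale; normalize so K[2, 2] = 1
    K = K / s
    t = np.linalg.solve(K, M[:, 3] / s)
    return K, R, t
```

Because M is only defined up to scale, the decomposition normalizes K so its (3,3) entry is 1; the focal lengths fx, fy and the image centre (Ox, Oy) can then be read off the diagonal and last column of K.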
Flow Diagram for Depth Estimation 
Step1: Two planes are placed at a right angle with checkerboard patterns 
Step2: Capture the image with respect to a fixed world coordinate frame. 
Step3: Capture the same scene from a shifted position with respect to the same world coordinate frame. 
Step4: Generate the 2D projected images, which are taken as input to calculate the projection matrix M. 
Step5: The projection matrix M gives us the camera parameters fx, fy, Ox, Oy, R and T. 
Step6: From these parameters we can obtain the depth ratio Zl / Zr. 
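For intuition about this last step, the simplest depth-recovery case is a rectified stereo pair, where depth follows directly from disparity as Z = f·B/d. This is a generic illustration, not the paper's exact derivation (which works with the two projection matrices and a depth ratio):

```python
import numpy as np

def depth_from_disparity(x_left, x_right, f, baseline):
    """Depth of a point seen in a rectified stereo pair.

    x_left, x_right : x pixel coordinate of the point in each image
    f               : focal length in pixels
    baseline        : separation of the two camera centres (world units)

    Z = f * B / d, with disparity d = x_left - x_right.
    """
    d = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    return f * baseline / d
```

Points closer to the cameras produce larger disparities, so depth falls off as 1/d; a disparity near zero corresponds to a point effectively at infinity.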
CONCLUSION 
In this paper we discuss an approach to calibrate a single camera and estimate the depth of an object in a stereovision system. Camera calibration is done to obtain the intrinsic and extrinsic parameters of a camera. These parameters can further be used to acquire the depth information of an object in a stereovision system. The depth, together with the orientation, can be used to reconstruct the 3D shape of the object from its two-dimensional images, a process known as 3D reconstruction. 
Our previous work gives an idea of the depth information of an object in a stereovision system; from that depth information a mesh description of the object can be obtained, and we hope this will help us reconstruct a 3D object in the near future. 
References 
