If you look closely at the proof in the paper, they state that for a point Pf in the laser frame of reference, the coordinates are [ X Z 1 ], since they set the plane of the laser scanner to be Y = 0. Thus, your laser rangefinder returns would go into Pf.
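As a minimal sketch, this is how each raw range/bearing return could be packed into Pf. The function name and the axis convention (X along the zero-bearing direction, Z in-plane and perpendicular to it) are my assumptions for illustration, not something the paper mandates:

```python
import math

def laser_return_to_Pf(r, theta):
    """Convert one rangefinder return (range r, bearing theta, both in
    the scan plane) into the homogeneous point Pf = [X, Z, 1].
    The Y coordinate is identically 0 because the scan plane is Y = 0,
    so it is simply dropped from the representation."""
    X = r * math.cos(theta)
    Z = r * math.sin(theta)
    return [X, Z, 1.0]
```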

Anyway, the authors have released their Matlab source code online: http://research.engineering.wustl.edu/~pless/code/calibZip.zip Have a look at that to figure out the inner workings. ]]>

Sorry to bother you again. I went through the approach proposed in the paper, but I am still unsure how to achieve this in OpenCV. For instance, I first calibrated the camera and am now trying to use Equation 3 and Equation 4 manually to figure out the orientation and translation. I don't know whether I can do that through OpenCV or not? Second, in the solution you posted above using solvePnP, what would Y be in this case for the laser rangefinder? Won't it be 0, with Z = height of scanner? Then how will calibration be possible if we are not giving any value for distance or depth? Any thoughts would be highly appreciated.

Thanks a lot. ]]>

So, for what you want to do, you would only be able to correlate points that lie in the plane of your SICK. For those correspondences to be useful, your camera and the SICK should not lie in the same plane. Visualize this as a flat sector spreading out parallel to the ground, which you shall be able to see in your camera's 2D image. After the extrinsic calibration, you shall be able to predict the depth of the corresponding region in the 2D image based on the relation you obtain from the calibration.

This paper http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1389752&tag=1 does exactly what you want to do, and I recommend going through it to get a better idea.

I think you can still use solvePnP to find the relationship – just provide the height of the SICK scanning plane as the Z coordinate for your laser scan points. Essentially, what you would be doing is figuring out which line of pixels corresponds to which line in the laser scan, and sending these pairs of points to solvePnP.
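A sketch of assembling those correspondences, assuming the SICK plane sits at a known constant height; every number and name below (h, scan_xy, pixel_uv) is made up for illustration:

```python
# Height of the scanning plane in metres (made-up value):
h = 0.35

# In-plane (x, y) laser coordinates of a few picked scan points:
scan_xy = [(1.2, 0.4), (1.5, -0.1), (2.0, 0.3)]
# Hand-matched pixel coordinates of the same physical points:
pixel_uv = [(312.0, 240.5), (401.0, 238.0), (455.5, 241.2)]

# Lift every 2D scan point to 3D by appending the constant height:
object_points = [(x, y, h) for (x, y) in scan_xy]
image_points = list(pixel_uv)

# These two equal-length lists are what you would hand to
# cv2.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs)
# after converting them to float32 NumPy arrays.
```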

Hope that helps! ]]>

Thanks a lot

Regards ]]>

Thanks! So, first, for the objects in Gazebo I made URDF models, and for the checkerboard I created a PNG file and mapped it onto a solid box as a texture. Then I used Gazebo plugins to simulate a laser rangefinder and a camera, and I set their initial positions in the environment. I also used a small ROS node to move the board around with the keyboard (I control the individual rotations about the axes).

For the extrinsic calibration, first, have a look at the next post (and the image captions!) https://icoderaven.wordpress.com/2012/02/08/pnp/

Now, what is actually going on is that both the camera and the lidar are looking at the same point in world space. What we need is a mapping between the two frames of reference, i.e. a transformation matrix consisting of a rotational and a translational component, such that we can represent a coordinate in our lidar frame of reference by the corresponding coordinate in our camera frame of reference. To find this matrix, we need to evaluate a set of point correspondences between the two frames, which is where the OpenCV solvePnP method comes in (http://opencv.willowgarage.com/documentation/cpp/camera_calibration_and_3d_reconstruction.html#cv-solvepnp).

From the published pointcloud I pull the data into a PCL (http://pointclouds.org/) point cloud. Then I create a 2D range image of the view using a method within PCL. Now, the points that I click on in the range image can be mapped back to the corresponding 3D coordinates in my pointcloud. I then click on the closest matching points in the camera image, create an array of these 3D-2D pairs, and send it over to solvePnP, which returns the required rotation and translation vectors. (I'm also sneaky and include camera calibration in this process – which explains why I use the cvFindChessboardCorners (http://opencv.willowgarage.com/documentation/camera_calibration_and_3d_reconstruction.html#findchessboardcorners) method to evaluate camera calibration.)
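To make the role of the recovered rotation and translation concrete, here is a small sketch of the transform-then-project step that solvePnP's output enables. The intrinsics (fx, fy, cx, cy) and all numbers are made-up illustrative values, not from my setup:

```python
def project_lidar_point(p_lidar, R, t, fx, fy, cx, cy):
    """Map a 3D lidar-frame point into the camera frame using the
    rotation matrix R (3x3 nested lists) and translation t (length 3),
    then project it through a simple pinhole camera model. This is the
    rigid transform that solvePnP estimates from the 3D-2D pairs."""
    # p_cam = R * p_lidar + t
    p_cam = [sum(R[i][j] * p_lidar[j] for j in range(3)) + t[i]
             for i in range(3)]
    X, Y, Z = p_cam
    # Pinhole projection (assumes Z > 0, i.e. point in front of camera)
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

# Identity rotation and zero translation: a point straight ahead
# should land on the principal point (cx, cy).
u, v = project_lidar_point([0.0, 0.0, 2.0],
                           [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                           [0.0, 0.0, 0.0],
                           fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```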

Now, after I have the rotation and translation vectors, I basically find the corresponding camera-frame coordinates of all the pointcloud data points in the camera FoV. For the points that fall within the dimensions of the image, I just map the image colours onto the pointcloud data points, and then publish this new coloured pointcloud. This is why you'll notice in the third screenshot that the shadow region of the checkerboard on the ground also has a checkerboard pattern – the point is correctly mapped to a point in the camera frame, but since the image is 2D and the checkerboard covers that region, it faithfully maps the checkerboard colour onto the 3D data point.
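The colour-mapping step (including why shadowed points inherit the occluding checkerboard's colour: any point that projects inside the image gets the pixel colour, occluded or not) can be sketched like this. All names are illustrative assumptions; in the real pipeline the (u, v, z) values come from projecting each cloud point with the solvePnP result:

```python
def colour_pointcloud(points_uvz, image, width, height):
    """For each pointcloud entry already projected to pixel coordinates
    (u, v) with camera-frame depth z, keep only the points that land
    inside the image and lie in front of the camera, and attach the
    pixel colour. `image` is a nested list: image[v][u] -> (r, g, b).
    Note there is no occlusion test, so points hidden behind the
    checkerboard still receive the checkerboard's colour."""
    coloured = []
    for (u, v, z) in points_uvz:
        ui, vi = int(round(u)), int(round(v))
        if z > 0 and 0 <= ui < width and 0 <= vi < height:
            coloured.append((u, v, z, image[vi][ui]))
    return coloured
```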

I would have released the code as open source, but unfortunately company policy forbade me from disclosing it. If you have any more questions, feel free to ask me here. ]]>

Ever wanted to try creating UIs using web tech? Check out http://appjs.org/ :) ]]>

Well, if you can access https sites, then this method shall work for you as well! That’s what connecting to port 443 is about. ]]>

For GitHub.. my college has a proxy, which is even worse. I never figured out how to get git past it :/

Btw.. do you have a repo?