

Tickling the Panda

So, after very nearly missing out on starting my Master’s and arriving three and a half weeks late thanks to immigration hooplas, I finally resumed work on the ArduCopter we ordered before I left for home last year. The motivation for moving to a separate platform was two-fold: first, to let us use fancier IMUs to dead reckon better, and second, to let us accurately timestamp captured images (using hardware-triggered captures) and IMU data, so as to help us accurately determine baselines while using SfM algorithms. There is also the added benefit of being able to process data on board using the dual cores on the PandaBoard, cutting out the latency issues that crept up while commanding the ARDrone over WiFi.

I’ve been fiddling around a bit with the ArduCopter source code, and realized that the inertial navigation has been designed to bridge short periods of spotty GPS coverage, not to act as a standalone solution, which is perfectly sensible for the typical ArduCopter use cases. However, since we don’t want to depend on the GPS, I realized that all that was needed was a little faking of the GPS in the custom UserCode.pde sketch in the ArduCopter source code.
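To make the idea concrete, here is a hypothetical sketch of what that faking could look like. The field names (g_gps->fix, latitude and friends) and the 10 Hz user hook are from my recollection of the APM 2.x-era code, so treat them as assumptions rather than gospel:

```cpp
// UserCode.pde -- hypothetical sketch of faking a GPS fix so the inertial
// navigation never thinks it has lost coverage. Field names follow my
// recollection of the APM 2.x AP_GPS driver; check them against your tree.
void userhook_MediumLoop()   // runs at 10 Hz, roughly a real GPS update rate
{
    g_gps->fix       = true;         // pretend we always have a lock
    g_gps->latitude  = 123456789;    // degrees * 1e7 (placeholder values)
    g_gps->longitude = 765432100;    // degrees * 1e7
    g_gps->altitude  = 100;          // centimetres
    g_gps->new_data  = true;         // tell consumers a fresh "fix" arrived
}
```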

The good part about the ArduCopter implementing the MAVLink protocol is that I can receive all this information directly over serial/telemetry using pymavlink, in a ROS node that reuses all my previous code built for the ARDrone. I knew all that modular programming would come in handy some time ;) One of our major worries was implementing a reliable failsafe mechanism (and by failsafe, I mean dropping dead), and yet again the beauty of completely accessible code came to the rescue. The problem: when the client overrides raw RC channel values via MAVLink, there is no way for the RC to regain control of the drone if the connection breaks. To fix this, I first ensured that I didn’t override channels 5, 6 and 7; then, in the 50 Hz user hook, I listened to the Ch 7 PWM values to detect the switch being flipped, and consequently disarmed the motors. I also set the Ch 6 slider to switch between Stabilise and Land so that I could perform a controlled landing whenever I wanted to.
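For the curious, the kill switch in UserCode.pde looks roughly like this. Again a sketch from memory: g.rc_7.radio_in, motors.armed() and init_disarm_motors() are the names I recall from the APM 2.x tree, so verify them against your version:

```cpp
// UserCode.pde -- Ch 7 kill switch in the 50 Hz user hook (sketch; names
// are from the APM 2.x-era source and may differ in your version).
void userhook_50Hz()
{
    // Treat anything above ~1700 us on Ch 7 as "switch flipped".
    // Since channels 5-7 are never overridden by the client, this
    // still works after the MAVLink connection has dropped.
    if (g.rc_7.radio_in > 1700 && motors.armed()) {
        init_disarm_motors();   // drop dead
    }
}
```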

So, everything’s in place after a little help from the community, and hopefully I shall be following some trajectories sometime soon. With a tether, of course.

Meanwhile, here’s a little list of extra things I needed to do to get all the ROS packages (pcl_ros in particular) working well on the PandaBoard after the basic install process.

ROS packages on the PandaBoard

• Install pcl_unstable from SVN source
• For the #error in finding endianness, manually specify the endianness as PCL_BIG_ENDIAN in the header file that throws the error (a crude hack, I know)
• To get pcl_ros to compile: the VTK-finding CMake file is messed up, so patch it with https://code.ros.org/trac/ros-pkg/attachment/ticket/5243/perception_pcl-1.0.1-vtk5.8.patch
• ROS_NOBUILDed ardrone2, since we don’t necessarily require the driver to compile. I hate that custom ffmpeg build stuff.
• Ran rosrun tf bullet_migration_sed.py for ardrone_pid
• Compiled OpenCV from source and hacked around CMakeLists.txt (explicitly added paths)
• Explicitly set ROS_MASTER_URI and ROS_HOSTNAME on both machines when testing out nodes (something like the snippet after this list)
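For that last point, the exports look something like this; the IPs are placeholders for the desktop (running roscore) and the PandaBoard:

```sh
# On the desktop running roscore (say, 192.168.1.2):
export ROS_MASTER_URI=http://192.168.1.2:11311
export ROS_HOSTNAME=192.168.1.2

# On the PandaBoard (say, 192.168.1.3), pointing at the same master:
export ROS_MASTER_URI=http://192.168.1.2:11311
export ROS_HOSTNAME=192.168.1.3
```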


Picture n’ Pose

So, as I mentioned in my previous post, I am working on recreating a 3D photorealistic environment by mapping image data onto my point cloud in real time. Now, the camera and the laser rangefinder can be considered as two separate frames of reference looking at the same point in the world. After calibration, i.e. after finding the rotational and translational relationship (a matrix) between the two frames, I can express (map) any point in one frame in the other. So here, I first project the 3D point cloud onto the camera’s frame of reference. Because the laser rangefinder has a much wider field of view than the camera, the region the camera captures is only a subset of the region acquired by the rangefinder, so the projected coordinates can run from, say, (-200, -300) to (800, 600) for a 640×480 image. Hence, I only need to colour the 3D points whose projections land between (0, 0) and (640, 480), using the RGB values at the corresponding image pixels. The resultant XYZRGB point cloud is then published, which is what you see here. Obviously, since the spatial resolution of the laser rangefinder is much lower than the camera’s, the resulting output is not as dense and requires interpolation, which is what I am working on right now.
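In code, the colouring step boils down to something like the following condensed sketch. The function and variable names here are mine, and I’ve folded the intrinsics and extrinsics into a single 3×4 projection matrix P = K[R|t]:

```cpp
#include <opencv2/core/core.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Sketch: colour laser points with image RGB, given a 3x4 projection
// matrix P = K [R|t] from calibration. All names are illustrative.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
colourCloud(const pcl::PointCloud<pcl::PointXYZ>& laser,
            const cv::Mat& image,     // 640x480, CV_8UC3 (BGR)
            const cv::Matx34d& P)
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr out(
        new pcl::PointCloud<pcl::PointXYZRGB>);
    for (const pcl::PointXYZ& p : laser.points) {
        // Project the 3D point into pixel coordinates.
        cv::Vec3d u = P * cv::Vec4d(p.x, p.y, p.z, 1.0);
        if (u[2] <= 0) continue;                  // behind the camera
        int col = static_cast<int>(u[0] / u[2]);
        int row = static_cast<int>(u[1] / u[2]);
        // Keep only points that land inside the image.
        if (col < 0 || col >= image.cols || row < 0 || row >= image.rows)
            continue;
        cv::Vec3b bgr = image.at<cv::Vec3b>(row, col);
        pcl::PointXYZRGB q;
        q.x = p.x; q.y = p.y; q.z = p.z;
        q.r = bgr[2]; q.g = bgr[1]; q.b = bgr[0];
        out->push_back(q);
    }
    return out;
}
```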

Head on to the images for a more detailed description. They’re fun!

Screenshot of the Calibration Process

So here, I first create a range image from the point cloud and display it in a HighGUI window for the user to select corner points. Since I already know how the 3D laser data was projected onto the 2D image, I can remap the clicked coordinates to the actual 3D points in the laser rangefinder data. The corresponding points are selected on the camera image similarly, and the code then moves on to the next image with detected chessboard corners, until a specified number of successful grabs is reached.
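Since the range image is just an organised projection of the cloud, the remapping is essentially an index lookup. A minimal sketch using PCL’s RangeImage (the function name is mine):

```cpp
#include <pcl/range_image/range_image.h>

// Minimal sketch: a clicked (u, v) pixel in the HighGUI window indexes
// straight back into the organised range image, recovering the underlying
// 3D point in the rangefinder's frame of reference.
bool pixelTo3D(const pcl::RangeImage& range_image, int u, int v,
               Eigen::Vector3f& point)
{
    if (!range_image.isValid(u, v))   // no laser return at this pixel
        return false;
    const pcl::PointWithRange& p = range_image.getPoint(u, v);
    point = Eigen::Vector3f(p.x, p.y, p.z);
    return true;
}
```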

Screenshot of the result of extrinsic calibration

Here's a screenshot of Rviz showing the cloud in action. Notice the slight bleed along the left edge of the checkerboard; that's caused by issues in selecting corner points in the range image. Hopefully, for the real-world calibration, we can use a checkerboard with a glossy print, so that the rays bounce off the black squares and leave us nice holes in the range image to select. Another interesting thing to note is the 'projection' of the board on the ground. The checkerboard occludes the region on the ground behind it, so the code faithfully copies whatever value it finds in the image at the corresponding 3D coordinate.

View after VGL

So, after zooming out, everything I mentioned becomes clearer. Colouring the cloud effectively increases the perceived resolution. The 'projection' on the ground is also very apparent here.