Tag Archive: ROS


Tickling the Panda

So after having narrowly missed out on starting my Master’s on time and arriving three and a half weeks late thanks to immigration hoopla, I finally resumed work on the ArduCopter we ordered before I left for home last year. The idea behind moving to a separate platform was twofold – first, to allow us to use fancier IMUs to dead reckon better, and second, to let us accurately timestamp captured images (using hardware-triggered captures) and IMU data so as to help us determine baselines while using SfM algorithms. There is also the added benefit of being able to process data on board using the dual cores of the PandaBoard, cutting down on the latency issues that crept up while commanding the ARDrone over WiFi.

I’ve been fiddling around a bit with the ArduCopter source code, and realized that the inertial navigation has been designed to bridge periods of spotty GPS coverage, not to act as a standalone solution – a perfectly sensible choice for the typical use cases of the ArduCopter. However, since we don’t necessarily want to depend on the GPS, it turned out that all that was needed was a little faking of the GPS in the custom UserCode.pde sketch in the ArduCopter source code.

The good part about the ArduCopter implementing the MAVLink protocol is that I can receive all this information directly over serial/telemetry using pymavlink, and build a ROS node that reuses all my previous code written for the ARDrone. I knew all that modular programming would come in handy some time ;) One of our major worries was implementing a reliable failsafe mechanism (and by failsafe, I mean dropping dead), and yet again the beauty of having the code completely accessible came to the rescue. The problem is that when the client overrides raw RC channel values via MAVLink, there is no way for the RC to regain control of the drone if the connection gets broken. To fix this, I first ensured that I didn’t override channels 5, 6 and 7, and then in the 50 Hz user hook I listened to the Ch 7 PWM values to detect the switch being flipped and consequently disarmed the motors. I also set the Ch 6 slider to switch between Stabilise and Land so that I could perform a controlled landing whenever I wanted to.
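Just to give a flavour of how little code the receive side needs, here is a bare-bones sketch of the bridge; the serial port, topic name and the choice of the ATTITUDE message are placeholders for illustration – the actual node consumes a lot more of the MAVLink stream.

import rospy
import tf.transformations as tft
from sensor_msgs.msg import Imu
from pymavlink import mavutil

rospy.init_node('arducopter_bridge')
pub = rospy.Publisher('arducopter/imu', Imu)

# serial port and baud rate stand in for the actual telemetry link
master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
master.wait_heartbeat()

while not rospy.is_shutdown():
    msg = master.recv_match(type='ATTITUDE', blocking=True)
    if msg is None:
        continue
    imu = Imu()
    imu.header.stamp = rospy.Time.now()
    imu.header.frame_id = 'arducopter'
    # ATTITUDE carries roll/pitch/yaw in radians and their rates in rad/s
    q = tft.quaternion_from_euler(msg.roll, msg.pitch, msg.yaw)
    imu.orientation.x, imu.orientation.y, imu.orientation.z, imu.orientation.w = q
    imu.angular_velocity.x = msg.rollspeed
    imu.angular_velocity.y = msg.pitchspeed
    imu.angular_velocity.z = msg.yawspeed
    pub.publish(imu)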

So, everything’s in place after a little help from the community, and hopefully I shall be following some trajectories sometime soon. With a tether, of course.

Meanwhile, here’s a little list of the extra things I needed to do to get all the ROS packages (specifically pcl_ros) working well on the PandaBoard after the basic install process.

ROS packages on the PandaBoard

• Installed pcl_unstable from SVN source.
• For the #error about determining endianness, manually specified the endianness as PCL_BIG_ENDIAN in the header file that throws the error (crude hack, I know).
• To get pcl_ros to compile, the FindVTK cmake file is messed up; patched it with https://code.ros.org/trac/ros-pkg/attachment/ticket/5243/perception_pcl-1.0.1-vtk5.8.patch
• ROS_NOBUILDed ardrone2, since we don’t necessarily require the driver to compile on the board. I hate that custom ffmpeg build stuff.
• Ran rosrun tf bullet_migration_sed.py for ardrone_pid.
• Compiled OpenCV from source and hacked around CMakeLists.txt (explicitly added paths).
• Explicitly set ROS_MASTER_URI and ROS_HOSTNAME on both machines when testing out nodes.


boost is the secret of my energy!

One of the bigger advantages of Python is ostensibly its ability to wrap C/C++ code and expose its functionality to normal Python code. I was attempting to do exactly that recently, and got pretty frustrated at my inability to use boost::python to expose my trajectory-follower PID library to Python, so that I could write scripts for calibrating the ARDrone in the mocap lab (my tracking and plotting are all done in Python).
Right off the blocks, when I tried defining the simplest boost::python class out of my PIDController class with only the constructor, I kept getting a compilation error about the compiler not being able to find a copy constructor definition. This was odd, since I had clearly declared only my parameterized constructor, and I had no clue where the copy constructor was coming from (the error message said something about boost::reference_ptr).
It turns out that by default boost::python expects your class to be copyable (i.e. it has a copy constructor and can be passed by value), so you need to explicitly state that the class is not meant to be copyable by adding a boost::noncopyable tag to your boost::python class_ template declaration.

This done, I then realized that my code used tf::Vector3 and tf::Matrix3x3, so I would need to build a wrapper to convert the numpy arrays provided from Python to these types. I also needed to convert the list of waypoints from a numpy array to a vector of floats, as required by the library. Also, since my trajectory follower library creates a ROS node and needs to broadcast and receive tf data, I had to initialise ROS using ros::init() before creating an object of my PIDController class, because the class declares a ros::NodeHandle, which requires ros::init() to have been called already.

Thus, I created a wrapper class whose sole member is a pointer to a PIDController. Its constructor takes in the default arguments, calls ros::init(), and then allocates a new PIDController object to the pointer. I have another function that accepts a numpy array as a boost::python::numeric::array, figures out the length of the array using boost::python::extract, and then uses extract again to pull out the values and push them to the PIDController instance.

To be able to import the generated module, I had to add my library path to PYTHONPATH, and in my CMakeLists I added python to the rosbuild_link_boost definition. Note that the name of the module and the generated library file must match exactly. Also, in order to get boost::python to understand that I was sending in a numpy ndarray, I had to declare so in my BOOST_PYTHON_MODULE:

#include <boost/python.hpp>

using namespace boost::python;

BOOST_PYTHON_MODULE(libfollow_trajectory)
{
    // tell boost::python that incoming arrays are numpy ndarrays
    numeric::array::set_module_and_type("numpy", "ndarray");

    // expose the (noncopyable) wrapper class with its eight-float constructor
    class_<PidControllerWrapper, boost::noncopyable>("PidController",
        init<float, float, float, float, float, float, float, float>());
}
Whew, and that’s it! It sure was longer and more complicated than I expected, but it works like a charm!
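For completeness, the Python side then boils down to importing the module and constructing the controller. The gain values below are made up, and set_waypoints merely stands in for whatever method the wrapper actually exports to accept a numpy array:

import numpy as np
# PYTHONPATH already points at the directory containing libfollow_trajectory.so
import libfollow_trajectory

# the eight floats feed the init<float, ...> constructor exposed above
controller = libfollow_trajectory.PidController(0.6, 0.0, 0.1, 0.6, 0.0, 0.1, 1.0, 0.4)

# an N x 3 array of waypoints, converted inside the wrapper to a vector of floats
waypoints = np.array([[0.0, 0.0, 1.0],
                      [1.0, 0.0, 1.0],
                      [1.0, 1.0, 1.5]])
controller.set_waypoints(waypoints)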

Also, working in mocap is fun! The ARDrone, however, is quite annoying – it drifts, ever so much. Here is one of a series of plots I made while the controller tried to follow a parabolic curve. Funnily enough, you can see that the drone actually always drifts to the left of where it thinks it is. One of the reasons I conducted this test was to determine whether this behaviour is repeatable, and if so, to take the drift into account and use an iterative learning controller to work out what new trajectory to provide in order to get the drone to follow the intended path (which, in this case, would be some trajectory to the right of the provided one).
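To make that idea concrete, the update I have in mind is the standard iterative learning one: feed the controller a reference shifted against last run's error. A minimal sketch – the learning gain and the toy trajectories here are entirely made up:

import numpy as np

def ilc_update(reference, actual, gain=0.5):
    # error is where we wanted to be minus where the drone actually went
    error = reference - actual
    # shift the commanded trajectory against the drift for the next run
    return reference + gain * error

# toy example: commanded vs. flown lateral positions along a trajectory
reference = np.linspace(0.0, 2.0, 5)
actual = reference - 0.2        # pretend the drone consistently drifts 0.2 m left
new_reference = ilc_update(reference, actual)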

Also, we are getting an arducopter! Can’t wait to get my hands on it! :D

Drone on!

Updates! So, among other things, I’m staying back on the BIRD project till December (the folks at UPenn graciously accepted my request to defer my admission by a semester) to do more fancy ML magic with quadrotors! :D (I promise loads of videos.) Meanwhile, the RISS program ended, and I presented my very second academic research poster!

Among the things I’ve been working on this past July and August is the ARDrone 2.0, a very nifty off-the-shelf $300 quadrotor that can be controlled over WiFi. The first thing I set upon was reimplementing the ARDrone ROS driver for the second iteration of the drone, since the ardrone_brown ROS package wasn’t compatible with the 2.0. So started my arduous struggle of trying to get the 2.0 SDK to compile as a C++ ROS application, what with cascading makefiles and hundreds of preprocessor directives and undocumented code. Fortunately, with Andreas’ involvement in the project, we could make some sense of the SDK and got it to compile as a ROS driver. It turned out to be a complete rewrite, and we managed to expose some additional functionality of the SDK, namely a mode change for stable hover, where the drone uses its downward-facing camera to perform optical flow based stabilization, and publishing the IMU navdata at the native frequency (~200 Hz). To demonstrate the former, we formulated a novel test we like to call The Kick Test.

With the driver done, the first thing I attempted was to get the ARDrone to follow trajectories given as sets of waypoints. The major problem is that the drone has no way to localise itself – the only feedback we have are the IMU’s readings, and hence the only way to track movement is by integration, which means unavoidable drift errors. An alternative could have been using the downward-facing camera to perform optical flow, but since we don’t have access to the onboard processor, the inherent latencies involved in streaming the video to the base station and sending modified controls back would have been prohibitive. Fortunately, the planner that sends the trajectories replans after every delta movement, so as long as the drone doesn’t drift too much from the intended path, the error is accommodated in the next plan. Towards this end I implemented a PD controller that listens to tf data spewed out by a tracker node. The tracker node listens to the navdata stream and continuously updates the pose of the drone’s frame with respect to the origin frame. This is required since the command velocities sent to the drone are always relative to the drone’s frame of reference. After a few days of endless tweaking of the gains I finally managed to get reasonable waypoint-following behaviour. You can get a short glance at the tracker tf data in RViz in this video.


Here, origin is the starting location of the drone, drone_int is the current integrated location of the drone based on the IMU readings, and lookahead distance is the location at the specified look-ahead distance in the direction of the waypoint.
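For the curious, the tracker essentially boils down to rotating the body-frame velocities from the navdata into the origin frame and integrating. A bare-bones sketch of that step, assuming body-frame velocities and a yaw angle pulled from the navdata (the real node does this inside the navdata callback and broadcasts the result over tf):

import numpy as np

def integrate_pose(x, y, yaw, vx, vy, dt):
    # vx, vy are velocities in the drone's own frame; rotate them through
    # the current yaw to get displacements in the origin frame
    x += (vx * np.cos(yaw) - vy * np.sin(yaw)) * dt
    y += (vx * np.sin(yaw) + vy * np.cos(yaw)) * dt
    return x, y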

However, it was pretty jerky; whenever the drone reached a waypoint, it would suddenly speed up due to the sudden jump in the error term. Jerks are, of course, very bad, since each jerk introduces a significant error peak as we integrate the velocities to obtain position. To mitigate this issue I implemented Drew’s suggested ‘dangling carrot’ approach – the error term is restricted to a maximum look-ahead distance along the direction to the closest waypoint. This finally resulted in the drone following trajectories pretty smoothly.
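In code, the dangling carrot is really just a clamp on the error vector – something along these lines (numpy sketch; the 0.5 m look-ahead is an arbitrary number for illustration):

import numpy as np

def carrot_error(position, waypoint, lookahead=0.5):
    error = waypoint - position
    distance = np.linalg.norm(error)
    if distance > lookahead:
        # cap the error at the look-ahead distance along the direction to
        # the waypoint, so the P term can't spike when waypoints switch
        error = error / distance * lookahead
    return error

# the PD controller then acts on the carrot rather than the raw error
position = np.array([0.0, 0.0, 1.0])
waypoint = np.array([3.0, 0.0, 1.0])
print(carrot_error(position, waypoint))    # capped at 0.5 m towards the waypoint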

The next thing I’m working on is providing a more intuitive sense of control inputs to the classifier, the intuition being that a bird always heads towards the most open region in an image rather than avoiding oncoming obstacles by rolling right or left. Towards this end I implemented a new controller that allows the drone to be controlled via a mouse. The idea behind the visual controller is to get the drone to move along rays in the image – thus mouse clicks correspond to a pair of yaw and pitch angles. Since the drone can’t move forward while pitching upwards (it would then just move backwards), my controller instead ramps the altitude, with the climb rate proportional to the forward velocity times the tan of the pitch angle. The next step is to interface this new controller with the classifier, see what kind of results I obtain, and possibly get a paper out (yay!).
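Roughly, the mapping from a pixel to a command looks like the sketch below. The pinhole intrinsics, the fixed forward speed and the sign conventions are all placeholders – the actual controller has a few more moving parts:

import numpy as np

# hypothetical intrinsics for the front camera and an arbitrary forward speed
FX, FY = 560.0, 560.0
CX, CY = 320.0, 180.0
FORWARD_SPEED = 0.5    # m/s

def click_to_command(u, v):
    # a pixel defines a ray: its horizontal angle becomes a yaw command,
    # its vertical angle a pitch (climb) angle
    yaw = np.arctan2(u - CX, FX)
    pitch = np.arctan2(CY - v, FY)
    vx = FORWARD_SPEED
    # climb rate proportional to forward speed times tan(pitch), so the
    # drone flies along the clicked ray instead of pitching backwards
    vz = vx * np.tan(pitch)
    return yaw, vx, vz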

I also beefed up the wxPython GUI that I worked on earlier in the summer to dynamically parse parameters from launch files (and from launch files included within those launch files) and present all the parameters in one unified dialog to modify and save or load as required. This makes life much easier, since a lot of parameters often need to be changed manually in the launch files during flight tests.

A sample screenshot of the parameters loaded from a launch file. The user can only launch a node if the parameters are loaded.

Exciting times ahead!

RI, CMU (And so should you!)

A lot’s happened over this month. I graduated (yay!), apparently, but I don’t get the degree for about 5 years (shakes fist at DU :-/), and I arrived at the RI in the first week of June. Also, I couldn’t attend the Jed-i finals, but found a good Samaritan to present the project in my stead – so here’s to you, Sanjith, for helping me at least present my (reasonably incomplete) project at the competition! There was also a little heartburn: I got selected for the national interview of the NS Scholarship for $30K, but they were adamant that I be present in person in Mumbai on the 15th. No amount of talking or convincing helped there, and so I had to let it go. Sad.

Anyway, out here, for the first two and a half weeks I dabbled in the untidy art of making a comprehensive wxPython GUI for starting all the ROS nodes that need to be fired up while testing the MAV. Along the way, I figured out a lot about Python and about threading issues. One of the most annoying issues with wxPython is that it requires all changes to UI members to be made from its own (main) thread. So, for instance, in my ROS image callbacks I could not simply lock the running threads and assign the buffer data of the incoming OpenCV image to the StaticBitmap – doing so let monsters run through the code, what with xlib errors and random crashes. After a lot of searching, I finally realised that all UI members should be driven only through events, since the event handlers are always called by wxPython’s main UI thread. Another interesting tidbit was to use wx.CallAfter() to post the event, so that it fired only after the subscriber thread’s control had left the method. It was also fun working with Python’s subprocess module to launch the ROS node binaries, and using Python’s awesomeness to look up the binary files in the ROS package path and generate dynamic drop-down lists from them.
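In case it helps anyone else fighting the same xlib gremlins, the pattern that ended up working is roughly the one below. This is a stripped-down sketch – the real GUI posts custom wx events, and the topic name and the rgb8 assumption are placeholders:

import wx
import rospy
from sensor_msgs.msg import Image

class CameraPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.bitmap = wx.StaticBitmap(self)
        # the ROS subscriber runs in its own thread...
        rospy.Subscriber('/ardrone/image_raw', Image, self.image_callback)

    def image_callback(self, msg):
        # ...so never touch wx objects here; hand the data over to the
        # main UI thread and return immediately
        wx.CallAfter(self.update_bitmap, msg)

    def update_bitmap(self, msg):
        # runs in wxPython's main thread, so touching the UI is safe
        img = wx.ImageFromBuffer(msg.width, msg.height, msg.data)  # assumes rgb8
        self.bitmap.SetBitmap(wx.BitmapFromImage(img))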

A first (and last) look at the GUI. The toggle buttons in red signify that the associated ROS node could not be fired up successfully, and the green ones are those that could. The code keeps track of the processes, so even if a node is killed externally, it is reflected in the GUI.

The next thing I worked on over the past few days was trying to determine the maximum likelihood estimate (MLE) of the distribution of trees, modelled as a Poisson process. The rationale behind it, as I understand it, is to have a better sense of how to plan paths beyond the trees currently visible in the 2D image obtained by the MAV.

The dataset provided is of Flag Staff Hill at CMU and is a static pointcloud of roughly 2.5 million points. The intersection of the tree trunks with the parallel plane has been highlighted.

So I had a dense pointcloud of Flag Staff Hill to use as my dataset. After a simple ground plane estimation, I used a parallel plane 1.5 metres above the ground plane to segment out the tree trunks, whose arcs (remember that the rangefinder can’t look around a tree) I then fed to a simple contour detection process to produce a scatter plot of the trees. I then used approximate nearest neighbours on this data to determine the distance to the farthest tree in a three-tree neighbourhood, used that distance to characterize the area associated with each tree and its neighbours, and binned these areas to determine the MLE – which was then simply the arithmetic mean of the binned areas. I also wrote up a nice little report on this in LaTeX.
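In Python terms, the estimate boils down to something like the sketch below, using scipy's KD-tree in place of the approximate nearest neighbour library I actually used; tree_xy is assumed to be the N x 2 scatter of detected trunk centres:

import numpy as np
from scipy.spatial import cKDTree

def mean_neighbourhood_area(tree_xy, k=3):
    kdtree = cKDTree(tree_xy)
    # distance to the farthest of the k nearest neighbours (k + 1 because
    # the query point itself comes back as the closest hit)
    dists, _ = kdtree.query(tree_xy, k=k + 1)
    r = dists[:, -1]
    areas = np.pi * r ** 2      # disc covering each tree's k-neighbourhood
    # the arithmetic mean of these areas is the estimate described above;
    # the corresponding Poisson intensity would be roughly k / mean area
    return np.mean(areas)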

A plot of the recognized trees after contour detection. Interestingly, notice the existence of trees in clumps – it was only later that I realized that the trees in the dataset were ghosted, with another tree offset by a small distance.

Not surprisingly, I’ve grown to like python a LOT! I could barely enjoy writing code in C++ for PCL after working on Python for just half a month. Possible convert? Maybe. I’ll just have to work more to find out its issues. /begin{rant} Also, for one of the best departments in CS in the US, the SCS admin is really sluggish in granting access. /end{rant}.

Picture n’ Pose

So, as I mentioned in my previous post, I am working on recreating a 3D photorealistic environment by mapping image data onto my pointcloud in realtime. The camera and the laser rangefinder can be considered two separate frames of reference looking at the same point in the world. After calibration, i.e. after finding the rotational and translational relationship (a matrix) between the two frames, I can express (map) any point in one frame in the other. So, here I first project the 3D pointcloud onto the camera’s frame of reference. Then I select only the indices which fall within the bounds of the image – the laser rangefinder has a much wider field of view than the camera, so the region the camera captures is a subset of the region acquired by the rangefinder, and the projected indices might run from, say, (-200,-300) to (800,600) for a 640×480 image. Hence, I only need to colour the 3D points whose projections lie between (0,0) and (640,480), using the RGB values at the corresponding image pixels. The resultant XYZRGB pointcloud is then published, which is what you see here. Obviously, since the spatial resolution of the laser rangefinder is much lower than the camera’s, the resulting output is not as dense and requires interpolation, which is what I am working on right now.
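Stripped of the ROS and PCL plumbing, the colouring step is just a projection followed by a bounds check – roughly the numpy sketch below, where the intrinsics K and the extrinsics R, t stand in for whatever the calibration produced:

import numpy as np

def colour_cloud(points, image, K, R, t):
    # transform laser-frame points (N x 3) into the camera frame
    cam = points @ R.T + t
    colours = np.zeros((points.shape[0], 3), dtype=np.uint8)
    in_front = cam[:, 2] > 0
    # perspective projection of the points in front of the camera
    u = (K[0, 0] * cam[in_front, 0] / cam[in_front, 2] + K[0, 2]).astype(int)
    v = (K[1, 1] * cam[in_front, 1] / cam[in_front, 2] + K[1, 2]).astype(int)
    h, w = image.shape[:2]
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[ok]
    # only the points that land inside the image get an RGB value
    colours[idx] = image[v[ok], u[ok]]
    return colours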

Head on to the images for a more detailed description. They’re fun!

Screenshot of the Calibration Process

So here, I first create a range image from the pointcloud and display it in a HighGUI window for the user to select corner points. As I already know how I projected the 3D laser data onto the 2D range image, I can remap from the clicked coordinates to the actual 3D point in the laser rangefinder data. The corresponding points are selected on the camera image similarly, and the code then moves on to the next image with detected chessboard corners, until a specified number of successful grabs has been accomplished.
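Once enough 3D-2D correspondences have been collected, recovering the rotation and translation between the frames is a PnP problem, and with known camera intrinsics OpenCV will happily solve it. A sketch of that step (object_points being the clicked laser-frame 3D points, image_points the matching chessboard corners, and K, dist the intrinsics from a prior intrinsic calibration):

import cv2
import numpy as np

def extrinsics_from_correspondences(object_points, image_points, K, dist):
    # object_points: N x 3 laser-frame points, image_points: N x 2 pixels
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float32),
        np.asarray(image_points, dtype=np.float32),
        K, dist)
    R, _ = cv2.Rodrigues(rvec)    # rotation taking laser frame to camera frame
    return R, tvec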

Screenshot of the result of extrinsic calibration

Here's a screenshot of RViz showing the cloud in action. Notice the slight bleed along the left edge of the checkerboard – that's because of issues in selecting corner points in the range image. Hopefully for the real-world calibration we might be able to use a glossy-print checkerboard, so that the rays bounce off the black squares, giving us nice holes in the range image to select. Another interesting thing to note is the 'projection' of the board on the ground. That's because the checkerboard is actually occluding the region on the ground behind it, and so the code faithfully copies whatever value it finds in the image corresponding to each 3D coordinate.

View after VGL

So, after zooming out, everything I mentioned becomes clearer. What this effectively does is increase the perceived resolution. The projection bit on the ground is also very apparent here.

Range find(h)er!

So for the past few weeks I’ve been working on this really interesting project for creating a 3D mobile car environment in real time for teleop (and later autonomous movement). It starts with calibrating a 3D laser rangefinder’s data against a monocular camera’s feed, which, once the calibration is done, allows me to map each coordinate in my image to a 3D point in world coordinates (within the range of my rangefinder, of course). Any obstacles coming up in the real-world environs can then be rendered accurately in my immersive virtual environment. And since everything’s in 3D, the operator’s point of view can be moved around to any required orientation – so, for instance, while parking, all that needs to be done is to shift to an overhead view. In addition, since teleop is relatively laggy, the presence of this rendered environment gives the operator a fair idea of the immediate surroundings, and the ability to continue along projected trajectories rather than stopping to wait for the connection to resume.

As the SICK nodder was taking some time to arrive, I decided to play around with Gazebo and simulate the camera, the 3D laser rangefinder and the checkerboard pattern for calibration within it, and thus attempt to calibrate the camera-rangefinder pair from within the simulation. In doing so, I was finally forced to shift to Ubuntu (shakes fist at lazy programmers), which, although smoother, isn’t entirely as bug-free as it is made out to be. So far I’ve created models for the camera and the rangefinder and implemented Gazebo dynamic plugins within them to simulate them. I’ve also spawned a checkerboard-textured box, which I move around the environment using a keyboard-driven node. This is what it looks like right now:

Screenshot of w.i.p.

So here, the environment is in the Gazebo window at the bottom, where I've (crudely) annotated the checkerboard, camera and laser rangefinder. There are other bodies to provide a more interesting scan. The pointcloud of the scan can be seen in the RViz window in the centre, and the camera image on the right. Note the displacement between the two. The KeyboardOp node is running in the background at the left, listening for keystrokes, and a sample HighGUI window displaying the detected chessboard corners is at the top left.

Looks optimistic!

Eelectricity

So I went around trying to install ROS Electric on my Fedora machine again (I had to increase the partition size earlier today using a GParted ISO). I went through the procedure on the website – fortunately, this time at least the initial ROS install completed without any issues. The deal is that with this iteration quite a lot of ROS’s dependencies have been offloaded to the system – something I realized as soon as I tried rosmaking vision_opencv. I needed OpenCV 2.3.1, and Fedora’s repos only provided 2.2, so I built OpenCV from source using cmake and threw in TBB, libv4l, the gstreamer stuff and unicap (since I’ll be dealing a lot with video). Once it was built, as in my very first post, I copied the generated opencv.pc file from /usr/local/lib/pkgconfig to /usr/lib64/pkgconfig. No dice – rosmake kept crying about not being able to find OpenCV 2.3.1. Then, judging by one of the prompts, I created a clone in the same folder called opencv-2.3.1.pc, ran ldconfig -v, and voila!

The next thing I set upon trying to make was simulator_gazebo, and I was (predictably) thrown loads of dependency issues. I had to install yaml-cpp-devel, tinyxml-devel, vtk-devel and libyaml-devel, run rosdep install for gazebo_tools, build assimp, copy /usr/include/ffmpeg/* to /usr/include (to fix the missing avformat.h), install hdf5-devel, add the -ltinyxml flag to /gazebo/build/CMakeCache.txt (and in gazebo_tools), and add VTK_DIR:FILEPATH=/usr/lib64/vtk-5.6 to pcl_ros/CMakeCache.txt. That still didn’t work.

Then I used the info at http://www.cmake.org/pipermail/cmake/2006-March/008633.html and changed VTK REQUIRED to VTK5 REQUIRED in the cmake files, and also changed the path in the URL file to point to lib64/vtk-5.6, which fixed the VTK errors. It was around this time that I created this question on answers.ros.org, which finally got resolved (yay!)