Updates! So, among other things, I’m staying on with the BIRD project till December (the folks at UPenn graciously accepted my request to defer my admission by a semester) to do more fancy ML magic with quadrotors! :D (I promise loads of videos.) Meanwhile, the RISS program ended, and I presented my very second academic research poster!

Among the things I’ve been working on this past July and August is the ARDrone 2.0, a very nifty off-the-shelf $300 quadrotor that can be controlled over WiFi. The first thing I set out to do was reimplement the ARDrone ROS driver for the second iteration of the drone, since the ardrone_brown ROS package wasn’t compatible with the 2.0. So began my arduous struggle of getting the 2.0 SDK to compile as a C++ ROS application, what with cascading makefiles, hundreds of preprocessor directives, and undocumented code. Fortunately, with Andreas’ involvement in the project, we could make some sense of the SDK and got it to compile as a ROS driver. It turned out to be a complete rewrite, and we managed to expose some additional functionality of the SDK, namely a mode change for stable hover, where the drone uses its downward-facing camera to perform optical-flow-based stabilization, and publishing the IMU navdata at its native frequency (~200 Hz). To demonstrate the former we formulated a novel test we like to call The Kick Test.
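If you want to sanity-check that navdata rate from the base station, a minimal sketch looks something like the following. The topic and message names (/ardrone/navdata, ardrone_autonomy/Navdata) are assumptions modelled on common ARDrone driver conventions, so substitute whatever your driver actually exposes:

```python
#!/usr/bin/env python
# Minimal sketch: measure the rate at which the driver publishes navdata.
# Topic and message names are assumptions -- adapt to your driver.
import rospy
from ardrone_autonomy.msg import Navdata  # assumed message type

class RateMeter(object):
    def __init__(self):
        self.count = 0
        self.start = None
        rospy.Subscriber('/ardrone/navdata', Navdata, self.callback)

    def callback(self, msg):
        now = rospy.get_time()
        if self.start is None:
            self.start = now
        self.count += 1
        elapsed = now - self.start
        if elapsed > 5.0:
            rospy.loginfo('navdata rate: %.1f Hz', self.count / elapsed)
            self.count, self.start = 0, now

if __name__ == '__main__':
    rospy.init_node('navdata_rate_meter')
    RateMeter()
    rospy.spin()
```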

The next thing I attempted was to get the ARDrone to follow trajectories specified as sets of waypoints. The major problem was that the drone has no way to localise itself – the only feedback we have are the IMU’s readings, and hence the only way to track movement is by integration, which means unavoidable drift errors. An alternative could have been using the downward-facing camera to perform optical flow, but since we don’t have access to the onboard processor, the latencies involved in streaming the video to the base station and sending modified controls back would have been prohibitive. Fortunately, the planner that sends the trajectories replans after every delta movement, so as long as the drone doesn’t drift too much from the intended path, the error is accommodated in the next plan. Towards this end I implemented a PD controller that listens to tf data spewed out by a tracker node. The tracker node listens to the navdata stream and continuously updates the pose of the drone’s frame with respect to the origin frame (a rough sketch of the tracker idea follows below). This is required since the command velocities sent to the drone are always relative to the drone’s frame of reference. After a few days of endless tweaking of the gains I finally managed to get reasonable waypoint-following behaviour. You can get a short glance at the tracker tf data in RViz in this video:


Here the origin is the starting location of the drone, drone_int is the current integrated location of the drone based on the IMU readings, and lookahead distance is the location at the specified lookahead distance in the direction of the waypoint.
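For the curious, here is a rough, self-contained sketch of the dead-reckoning idea behind the tracker: rotate the body-frame velocities reported in navdata into the origin frame, integrate them, and broadcast the result as a tf transform. The topic, message type, and field conventions (vx/vy in mm/s, rotZ in degrees, altd in mm) are assumptions borrowed from the usual ARDrone navdata layout, not the exact code I wrote:

```python
#!/usr/bin/env python
# Sketch of the tracker: dead-reckon the drone's pose from navdata velocities
# and broadcast it as the tf transform origin -> drone_int.
import math
import rospy
import tf
from ardrone_autonomy.msg import Navdata  # assumed message type

class Tracker(object):
    def __init__(self):
        self.x = self.y = 0.0
        self.last_time = None
        self.broadcaster = tf.TransformBroadcaster()
        rospy.Subscriber('/ardrone/navdata', Navdata, self.navdata_cb)

    def navdata_cb(self, nav):
        now = rospy.get_time()
        if self.last_time is not None:
            dt = now - self.last_time
            yaw = math.radians(nav.rotZ)
            # Rotate body-frame velocities (mm/s) into the origin frame, then integrate.
            vx = (nav.vx * math.cos(yaw) - nav.vy * math.sin(yaw)) / 1000.0
            vy = (nav.vx * math.sin(yaw) + nav.vy * math.cos(yaw)) / 1000.0
            self.x += vx * dt
            self.y += vy * dt
            q = tf.transformations.quaternion_from_euler(0, 0, yaw)
            self.broadcaster.sendTransform((self.x, self.y, nav.altd / 1000.0),
                                           q, rospy.Time.now(),
                                           'drone_int', 'origin')
        self.last_time = now

if __name__ == '__main__':
    rospy.init_node('drone_tracker')
    Tracker()
    rospy.spin()
```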

However, it was pretty jerky; whenever the drone reached a waypoint, it would suddenly speed up due to the sudden change in the error term. Jerks are, of course, very bad, since each jerk introduces a significant error peak as we integrate the velocities to obtain position. So, to mitigate this issue, I implemented Drew’s suggested ‘dangling carrot’ approach, with the error term capped at a maximum lookahead distance along the direction to the closest waypoint. This finally resulted in the drone following trajectories pretty smoothly.
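The capping idea is tiny; a toy version is below. The gains and function names are made up for illustration, not the actual controller code:

```python
import numpy as np

def carrot_error(drone_pos, waypoint, lookahead):
    """'Dangling carrot': cap the position error at a fixed lookahead distance
    along the direction to the waypoint, so the commanded velocity stays
    bounded even when the drone is far from the goal."""
    error = np.asarray(waypoint, dtype=float) - np.asarray(drone_pos, dtype=float)
    dist = np.linalg.norm(error)
    if dist > lookahead:
        error *= lookahead / dist
    return error

def pd_command(error, prev_error, dt, kp=0.4, kd=0.1):
    # PD command on the capped error (gains are illustrative placeholders).
    return kp * error + kd * (error - prev_error) / dt
```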

What I’m working on now is providing a more intuitive sense of control inputs to the classifier. The intuition is that a bird always heads towards the most open region in the image, rather than avoiding oncoming obstacles by rolling right or left. Towards this end I implemented a new controller that allows the drone to be controlled via a mouse. The idea behind the visual controller is to get the drone to move along rays in the image – thus a mouse click corresponds to a pair of yaw and pitch angles. Since the drone can’t move forward while pitching upwards (it would then just move backwards), my controller instead ramps the altitude, with the climb rate proportional to the forward velocity times the tangent of the pitch angle. The next step is to interface this new controller with the classifier, see what kind of results I obtain, and possibly get a paper out (yay!).
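Roughly, the mapping looks something like the sketch below. The fields of view and speeds are placeholder numbers (use the camera calibration in practice), and the real controller feeds these values through the same PD machinery described above:

```python
import math

def click_to_angles(px, py, img_w, img_h, hfov_deg=92.0, vfov_deg=51.0):
    """Map a mouse click in the forward camera image to a (yaw, pitch) pair.
    The fields of view here are assumed values, not calibrated ones."""
    yaw = ((px - img_w / 2.0) / (img_w / 2.0)) * math.radians(hfov_deg / 2.0)
    pitch = -((py - img_h / 2.0) / (img_h / 2.0)) * math.radians(vfov_deg / 2.0)
    return yaw, pitch

def ray_command(yaw, pitch, forward_speed=0.3):
    """Move along the clicked ray: yaw towards it, fly forward, and ramp the
    altitude at forward_speed * tan(pitch) instead of pitching the drone up."""
    return {'yaw_rate': yaw,                        # turn towards the ray
            'vx': forward_speed,                    # forward velocity
            'vz': forward_speed * math.tan(pitch)}  # climb rate along the ray
```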

I also beefed up the wxPython GUI that I worked on earlier in the summer to dynamically parse parameters from launch files (and from launch files included within those launch files) and present them all in one unified dialog, where they can be modified, saved, or loaded as required. This makes life much easier, since a lot of parameters often need to be changed manually in the launch files during flight tests.

A sample screenshot of the parameters loaded from a launch file. The user can only launch a node if the parameters are loaded.
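For a flavour of the parsing, here is a bare-bones sketch of recursively collecting <param> tags from a launch file and its <include>s. Real roslaunch files use substitution args like $(find pkg), which this toy version simply skips, and the file name is made up:

```python
# Sketch: recursively gather <param> entries from a launch file and any
# launch files it <include>s, ready to be shown in a unified dialog.
import xml.etree.ElementTree as ET

def collect_params(launch_path, params=None):
    params = {} if params is None else params
    root = ET.parse(launch_path).getroot()
    for elem in root.iter():
        if elem.tag == 'param':
            params[elem.get('name')] = elem.get('value')
        elif elem.tag == 'include':
            nested = elem.get('file')
            if nested and '$(' not in nested:   # skip unresolved substitution args
                collect_params(nested, params)
    return params

if __name__ == '__main__':
    # 'flight_test.launch' is a hypothetical file name.
    for name, value in collect_params('flight_test.launch').items():
        print(name, '=', value)
```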

Exciting times ahead!