
Tick tock

Just a bunch of tips to get a WiFi access point working on startup on the Pandaboard, in addition to getting ntp to work on our local network.

Use
http://nims11.wordpress.com/2012/04/27/hostapd-the-linux-way-to-create-virtual-wifi-access-point/

and
http://www.ubuntugeek.com/network-time-protocol-ntp-server-and-clients-setup-in-ubuntu.html

to get a hostapd.conf that looks like this:

interface=wlan0
driver=nl80211
ssid=flyingpanda
hw_mode=g
channel=1

And the dhcpd.conf (/etc/dhcp/dhcpd.conf):

ddns-update-style none;
ignore client-updates;
authoritative;
option local-wpad code 252 = text;

subnet 10.0.0.0 netmask 255.255.255.0 {
    # --- default gateway
    option routers 10.0.0.1;
    # --- Netmask
    option subnet-mask 255.255.255.0;
    # --- Broadcast Address
    option broadcast-address 10.0.0.255;
    # --- Domain name servers, tells the clients which DNS servers to use.
    option domain-name-servers 10.0.0.1, 8.8.8.8, 8.8.4.4;
    option time-offset 0;
    range 10.0.0.3 10.0.0.13;
    default-lease-time 1209600;
    max-lease-time 1814400;
}

The next thing is to NTP-timesync the two. The complication is that the NTP server has to be the laptop that connects to the pandaboard, while the pandaboard itself hosts the network. Hence, we need to detect when a laptop connects to the pandaboard’s wifi hotspot, and then run ntpdate accordingly.

The thing to remember here is that ntp works in a hierarchical setup, i.e. there are different ‘strata’ of time accuracy, with an atomic clock being stratum 0 and internet time servers being stratum 1, 2 and so on. If a local system is not ntp-synced, its stratum is set by the daemon to 16. So out in the field, if the server (laptop) has not been connected to the internet, the client (pandaboard) can see what time the server is providing, but it also sees that the stratum level is 16, and so decides to ignore it. To avoid this, we artificially ‘fudge’ a local timeserver on our server at stratum 10, so that whenever no internet time server at a better (lower) stratum is reachable, the local clock is still used to sync all the clients.

Thus, on the server, the relevant lines in /etc/ntp.conf are:


# By default, exchange time with everybody, but don't allow configuration.

restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery

# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1

#Allow the pandaboard to query for time
restrict 192.168.2.0 mask 255.255.255.0 nomodify notrap

#Set my stratum to be artificially high
server 127.127.1.0
fudge 127.127.1.0 stratum 10

# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
broadcast 192.168.2.255

And on the pandaboard, the relevant lines in /etc/ntp.conf are:

restrict default notrust nomodify nopeer
restrict 192.168.2.0 mask 255.255.255.0
restrict 127.0.0.1
server $FIRST_PEER
#I want to listen to time broadcasts on my local subnet
disable auth
broadcastclient

Note that $FIRST_PEER is an environment variable I define in my ~/.bashrc, holding the first IP address that connects to the pandaboard over WiFi:

#Also try to autodetect first connected peer
#If the environment variable is already defined, do nothing
if [ -z "$FIRST_PEER" ]; then
    export FIRST_PEER=$(netstat --inet --numeric-hosts | grep ESTABLISHED | grep ssh | sed 's_.*ssh *\([0-9]*\.[0-9]*\.[0-9]*\.[0-9]*\).*_\1_')
    sudo service ntp stop
    sudo ntpdate $FIRST_PEER
    sudo service ntp start
fi

As in my earlier snippet for autoconfiguring ROS_HOSTNAME and ROS_MASTER_URI, I use the power of sed to pull out the IP address of the first entry that netstat gives me. I then force ntpdate to synchronize with this peer, which is not very elegant right now, since I need to enter the account password every time I ssh in. The alternative would be to set up ntpdate as a cron job, or call this same command from one of my initialization scripts, but I’m too lazy to do that now :P

Ewok Rampage and Fusion

After the ridiculously busy first semester I can finally get back to research. I managed to work on a couple of interesting projects in the meantime. In my ML project, which I might build on later this year, we investigate a hybrid reinforcement learning approach to nonlinear control http://www.youtube.com/watch?v=dMhP8oSPiWM. For the manipulation project, we devised a ranking-SVM approach that learns how preferable each action is in a cluttered scene, so that a robotic (Barrett!) arm can clear the scene autonomously, as a human operator would (Paper).

Back on BIRD, I’m trying to fuse visual odometry and accelerometer data to get reasonably accurate state information for tracking the quadrotor outdoors. The folks out at Zurich have done a pretty neat job doing just that, and I set about trying to reimplement their approach. For visual odometry they use a modified version of PTAM that keeps a fixed maximum number of keyframes, which are used to generate a local map. Whenever the camera moves far enough out of the currently defined map to warrant the inclusion of an additional keyframe, the oldest keyframe is popped. This gives a very nice trade-off between robust pose estimation and fast execution on board. They have also released an EKF implementation that uses the pose estimate provided by PTAM as an external update to the filter.

As is always the case, for the life of me I haven’t been able to replicate their results with our 3DM-GX3 IMU. Turns out that their coordinate frames are inverted (gravity supposedly points up) and may be left-handed (:?) The net result is that although PTAM seems to give sensible updates, the state drifts away very quickly. However, since PTAM performs so admirably (it’s the first purely vision-based system in my experience that works SO well), I decided to test out tracking using only PTAM in VICON, just to see how good it is quantitatively. But getting it to run on the Pandaboard led me (as usual) to a whole host of issues, the highlight being a bug in ROS’ serialization library for ARM-based processors X(

Anyway, on to the tips!
– Set use_ros_time to true for the camera1394 driver to avoid ‘Time is out of dual 32-bit range’ errors.
– For the bus fault (error code -7), use the patch from https://github.com/ros/roscpp_core/pull/8
– To automatically set ROS_MASTER_URI to point to the pandaboard and ROS_HOSTNAME to the IP address assigned to the laptop whenever the laptop connects to the pandaboard’s WiFi hotspot, add this nifty little bash script to the end of your ~/.bashrc
[Update 7/18: Made this more robust; it now takes into account connecting to flyingpanda using a directional antenna]

#Check if we're connected to the flyingpanda SSID
SSID=$(iwconfig wlan0 2>/dev/null | grep "ESSID:" | sed "s/.*ESSID:\"\(.*\)\".*/\1/")
case "$SSID" in
flyingpanda)
    export ROS_HOSTNAME=$(ip addr show dev wlan0 | sed -e's/^.*inet \([^ ]*\)\/.*$/\1/;t;d') #e.g. 192.168.2.84
    PANDA=$(echo $ROS_HOSTNAME | sed 's/\.[0-9]*$/.1/')
    export ROS_MASTER_URI=http://$PANDA:11311
    echo "Detected connection to $SSID, setting ROS_HOSTNAME to $ROS_HOSTNAME and master to $ROS_MASTER_URI" ;;
*)
    #For visibility of nodes run from this system when I use an external ros server
    export ROS_HOSTNAME=$(ip addr show dev eth0 | sed -e's/^.*inet \([^ ]*\)\/.*$/\1/;t;d')
    if [[ $ROS_HOSTNAME == "" ]]
    then
        export ROS_HOSTNAME=127.0.0.1
    fi
    base_ip=`echo $ROS_HOSTNAME | cut -d"." -f1-3`
    if [[ $base_ip == "192.168.2" ]]
    then
        export ROS_MASTER_URI=http://192.168.2.1:11311
    else
        export ROS_MASTER_URI=http://$ROS_HOSTNAME:11311
    fi
    echo "Setting ROS_HOSTNAME to $ROS_HOSTNAME and master to $ROS_MASTER_URI"
esac

Essentially, I first check whether I’m connected to the flyingpanda SSID, then extract the IP address assigned to my laptop, and then export the required variables. Fun stuff. There is a lot to be said for sed. I found this page pretty helpful – http://www.grymoire.com/Unix/Sed.html

Tickling the Panda

So after having narrowly missed out on starting my Master’s and arriving three and a half weeks late thanks to immigration hooplas, I finally resumed work on the Arducopter we ordered before I left for home last year. The idea behind moving to a separate platform was twofold – first, to let us use fancier IMUs to dead reckon better, and second, to let us accurately timestamp captured images (using hardware-triggered captures) and IMU data, so as to help us accurately determine baselines while using SfM algorithms. There is also the added benefit of being able to process data on board using the dual cores on the Pandaboard, cutting the latency issues that also crept up while commanding the ARDrone over WiFi.

I’ve been fiddling around a bit with the ArduCopter source code, and realized that the inertial navigation has been designed to work in between periods of spotty GPS coverage, and not as a standalone solution, which is a perfectly sensible idea for the typical use cases of the ArduCopter. However, since we don’t necessarily want to depend on the GPS, I realized that all that was needed was to fake the GPS a little in the custom UserCode.pde sketch in the ArduCopter source code.

The good part about the ArduCopter implementing the MAVLink protocol is that I can receive all this information directly over serial/telemetry using pymavlink, and create a ROS node that reuses all my previous code built for the ARDrone. I knew all that modular programming would come in handy some time ;) One of our major worries was implementing a reliable failsafe mechanism (and by failsafe, I mean dropping dead), and yet again the beauty of completely accessible code came to the rescue. When the client overrides raw RC channel values via MAVLink, there is no way for the RC to regain control of the drone if the connection gets broken. To fix this, I first ensured that I didn’t override channels 5, 6 and 7, and then, in the 50 Hz user hook, I listened to the Ch 7 PWM values to detect the switch being flipped and consequently disarmed the motors. I also set the Ch 6 slider to switch between Stabilise and Land so that I could perform a controlled landing whenever I wanted to.
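To make that override behaviour concrete, here’s a minimal pymavlink sketch of the kind of message the base station sends; the connection string, baud rate and PWM values are placeholders, and I’m relying on the MAVLink convention that sending 0 for a channel releases it back to the RC.

#Minimal RC-override sketch from the base station side (placeholders throughout)
from pymavlink import mavutil

master = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)  # telemetry link (placeholder device)
master.wait_heartbeat()

# Override only channels 1-4 (roll, pitch, throttle, yaw). Sending 0 for
# channels 5-8 releases them to the RC transmitter, so the Ch 6 flight-mode
# slider and the Ch 7 kill switch stay with the safety pilot.
master.mav.rc_channels_override_send(
    master.target_system, master.target_component,
    1500, 1500, 1300, 1500,   # ch1-ch4 PWM values (placeholders)
    0, 0, 0, 0)               # ch5-ch8: left to the RC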

So, everything’s in place after a little help from the community, and hopefully I shall be following some trajectories sometime soon. With a tether, of course.

Meanwhile, here’s a little list of extra things I needed to do to get all the ROS packages working well on the PandaBoard after the basic install process. Specifically pcl_ros.

ROS packages on Pandaboard

• Install pcl_unstable from svn source
• For the #error in finding endianness, manually specify the endianness to be PCL_BIG_ENDIAN in the header file that throws the error (Crude hack, I know)
• To get pcl_ros to compile, the vtkfind cmake file is messed up. Patch with https://code.ros.org/trac/ros-pkg/attachment/ticket/5243/perception_pcl-1.0.1-vtk5.8.patch
• ROS_NOBUILDed ardrone2 since we don’t necessarily require the driver to compile. I hate that custom ffmpeg build stuff.
• Ran rosrun tf bullet_migration_sed.py for ardrone_pid
• Compiled OpenCV from source, hacked around CMakeLists.txt (explicitly added paths)
• Explicitly set ROS_MASTER_URIs and ROS_HOSTNAMEs on both machines when testing out nodes

boost is the secret of my energy!

One of the bigger advantages of python is ostensibly its ability to wrap C/C++ code and expose its functionality to normal python code. I was attempting to do exactly that recently, and got pretty frustrated at my inability to use boost::python to expose my trajectory-follower PID library to my python code, so that I could write scripts for calibrating the ARDrone in the mocap lab, since my tracking and plotting are done in python.
Right out of the blocks, when I tried defining the simplest boost::python class out of my PIDController class with only the constructor, I kept receiving a compilation error about the compiler not being able to find the copy constructor definition. This was odd, since I had clearly declared only my parameterized constructor, and I had no clue where the copy constructor was coming from. (The error message said something about boost::reference_ptr.)
Turns out that by default boost::python expects your class to be copyable (i.e. it has a copy constructor and can be passed by value), and so you need to clearly specify if the class is not meant to be copyable by adding a boost::noncopyable tag to your boost::python class_ template declaration.

This done, I then realized that my code used tf::Vector3 and tf::Matrix3x3, and I would need to build a wrapper to convert the python-provided numpy arrays to these types. I also needed to convert the list of waypoints from a numpy array to a vector of floats, as required by the library. Also, since my trajectory follower library creates a ROS node and needs to broadcast and receive tf data, I needed to initialise ROS using ros::init() before creating an object of my PIDController class, since it declares a ros::NodeHandle, which requires ros::init() to have been called already.

Thus, I created a wrapper class whose sole member is a pointer to a PIDController. My constructor takes in the default arguments, calls ros::init(), and then assigns a newly allocated PIDController object to the pointer. I have another function that accepts a numpy array as a boost::python::numeric::array, figures out the length of the array using boost::python::extract, and then uses extract again to pull out the values and push them to the PIDController instance.

To be able to import the generated module, I had to add my library path to PYTHONPATH, and in my CMakeLists I added python to the rosbuild_link_boost definition. Note that the name of the module and the generated library file should match exactly. Also, in order to get boost::python to understand that I was sending in a numpy ndarray, I had to declare so in my BOOST_PYTHON_MODULE.

BOOST_PYTHON_MODULE(libfollow_trajectory)
{
    boost::python::numeric::array::set_module_and_type("numpy", "ndarray");
    boost::python::class_<PidControllerWrapper, boost::noncopyable>("PidController",
        boost::python::init<float, float, float, float, float, float, float, float>());
}
Whew, and that’s it! It sure was longer and more complicated than I expected, but it works like a charm!
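For completeness, this is roughly how the module gets used from the python side; the library path and the waypoint-setting method name are made up for illustration, and the gain ordering is just whatever the wrapped constructor expects.

import sys
sys.path.append('/path/to/build/lib')   # wherever libfollow_trajectory.so ends up (placeholder)
import numpy as np
import libfollow_trajectory

# eight float gains, in the order the wrapped constructor expects
pid = libfollow_trajectory.PidController(0.4, 0.0, 0.1, 0.4, 0.0, 0.1, 1.0, 0.5)
# waypoints as a flat numpy array of floats; set_waypoints is a hypothetical
# wrapper method that does the numpy -> std::vector<float> copy
waypoints = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 1.0])
# pid.set_waypoints(waypoints)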

Also, working in mocap is fun! The ARDrone, however, is quite annoying. It drifts, ever so much. Here is an example of a series of plots I calculated while the controller tried to follow a parabolic curve. Funnily enough, you can see that the drone actually always drifts to the left of where it thinks it is. One of the reasons I conducted this test was to determine whether this behaviour is repeatable, and if so, to take this drift into account and use an iterative learning controller to work out what new trajectory to provide in order to get the drone to follow the intended path (which, in this case, would be some trajectory to the right of the provided one).

Also, we are getting an arducopter! Can’t wait to get my hands on it! :D

The slashdot experience

So we managed to get the BIRD MURI video and page ‘slashdotted’ last week. I set up Google Analytics and got some fun stats that Drew thought would be interesting to disseminate.
tl;dr

  • The web page and the youtube video got an almost equal number of hits, ~3K each.
  • Web page: 93% left before 10 seconds. Video: average view retention 1:24.
  • Retention through the initial systems explanation and the first 43 seconds of the video is > 90%.
  • Conclusions –

a. People prefer youtube videos to web page views

b. Video is much more effective at disseminating information and engaging a prospective audience.

c. Assume a much more global viewership than you normally would

d. 1:30 seems like a good length for a demo video.

In case you haven’t seen the video, here it is

Also, have a look at the slashdot post. Some of the comments are fun :)

Raw stats

Since Thursday, 3,039 unique people visited the site, and only 54.62% were from the US. The city with the most visitors was London (1.9%). The next biggest viewerships were from Canada (8.9%), the UK (6.3%) and Australia (5%).
Browsers – Chrome (45.9%), Firefox (34%), Safari(!) (10%)
OS – Windows (55%), Mac (19%), Linux (18.2%), iOS (4.4%)
The flip side: 93% of people left before 10 seconds.
Also, I checked the youtube stats, which are much better.
We got 3,234 views, with an average view duration of 1:24 minutes (which is 72% of the video duration). The flight video ends at 1:32, at 80% absolute retention.
Also, the retention percentage through the initial systems explanation and the first 43 seconds of the video is > 90%,
which probably implies that the video is much more effective at disseminating information and engaging a prospective audience.

Extract, color, wait and pickle

So, today I wanted to extract a copy of our ardrone 2.0 package from our repository as a separate hg repository, with all its history imported. A little bit of searching helped me get to the solution:
a. Enable the convert extension for mercurial by adding the following lines to ~/.hgrc
[extensions]
hgext.convert=

b. Create a map file which would instruct hg on how to extract the subfolder
include cpp/bird_ros_pkgs/ardrone2
rename cpp/bird_ros_pkgs/ardrone2 .

The first line specifies the path of the folder I want to extract, relative to the root of the repo. The next line redirects the contents of that folder to the root of the new repo (. refers to the root of the new repo).
c. Run the hg convert utility!
hg convert --filemap ~/map.txt ~/bird_repo ~/ardrone2_repo/

And that’s it!

Also, miscellaneous notes from October

1. Do not try to use colorgcc with ROS on ubuntu. It is a living nightmare. The issue lies in symlinking gcc and g++ to colorgcc and prepending the path to the symlinks to the PATH environment variable. However, many packages don’t really structure themselves well and don’t seem to traverse the PATH properly. So, although colorgcc was working for normal code, it failed miserably for my ardrone2 package, for instance, which uses custom (horrid cascading) makefiles.

2. The thesis theme for wordpress is pretty neat. Using the PHP hooks allows you a great amount of flexibility. Also, whenever using thesis, add the thesis_hooks plugin. It serves as a handy guide to view all the customisable hooks for versions before 2.0.

3. When working with tf, always waitForTransform() before starting execution based on the transforms. Not doing so results in annoying exceptions. Also, rospy automatically spawns off threads for Subscribe and Publish. Keep that in mind, and use locks whenever necessary. Pickling does not work for objects with locks in them, so design classes that serve purely as data storage and pickle them through accessor classes that lock access to the instance.
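A minimal sketch of the data-storage/accessor split I mean (the class names are made up):

import pickle
import threading

class DroneState(object):
    """Pure data: no locks, no ROS handles, so it pickles cleanly."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.yaw = 0.0

class DroneStateStore(object):
    """Accessor that owns the (unpicklable) lock and serialises all access."""
    def __init__(self):
        self._lock = threading.Lock()
        self._state = DroneState()

    def update(self, position, yaw):
        with self._lock:
            self._state.position = position
            self._state.yaw = yaw

    def save(self, path):
        with self._lock:
            with open(path, 'wb') as f:
                pickle.dump(self._state, f)   # only the plain data object gets pickled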

4. In Simulink, never run differentiation directly on a sensor input. Ideally, use a low-pass filter, but for a quick estimate use a Laplace-domain transfer function such as s/(1+2*pi*f*s). Also, try to replace algebraic loops involving differentiation with integration.

5. In LaTeX, to include multiple figures on the same line with their own captions (just like in my brand new SoP :D), use the subcaption package. Then include images like so:
\begin{figure}[h]
\centering
\subcaptionbox*{\footnotesize{}}{\includegraphics[width = 0.30\linewidth, trim=[left] [top] [right] [bottom], clip]{[path_to_image]}}~
\subcaptionbox*{\footnotesize{}}{\includegraphics[width = 0.30\linewidth, trim=[left] [top] [right] [bottom], clip]{[path_to_image]}}~
...
\end{figure}

The ~ at the end is a non-breaking space, i.e. a space at which LaTeX is not allowed to break the line. I used 0.3 times the linewidth since I had three pictures in there.

Gawk!

Awk is awesome!

So I was trying to selectively open a whole bunch of rosbags for a little bit of labelling, and decided to try out gawk to help automate the loading process, rather than manually typing in the path to each bag file. I had earlier been using a shell script to run through every file in ./*.bag and pass each one to rxbag. However, when I wanted to resume my work I had to pick up going through the files midway, and so needed some pattern matching in there. Which is where awk came into the picture.

Anyway, handy tip – after some searching I finally found the required call, which turned out to be, not surprisingly, system(), for calling external programs. But I had to send in the matched patterns as an argument to rxbag, so this is what I did:

awk '/2012-05-17/ {system("rxbag " "\""$0"\"")}' ds.txt

So 2012-05-17 is the regex pattern I was looking for (the single quotes are there to stop bash from interpreting the stuff within), and ds.txt contains a simple piped listing of the bag files in my directory (ls *.bag > ds.txt). The fancy part is in the system call. Since I had to send in a variable as an argument ($0 is the entire line that awk matched) within the double-quoted system call, I had to use the double-quote escape characters you see there, "\"". And that was it! awk calls the code within the braces for every line that matches the regex pattern.

Neat, eh?

Talking about things neat, here’s a fun video of the project I’m working on at the RI :D

Drone on!

Updates! So, among other things, I’m staying back on the BIRD project till December (The folks at UPenn graciously accepted my request to defer my admission by a semester) to do more fancy ML magic with quadrotors! :D (I promise loads of videos) The RISS program ended meanwhile, and I presented my very second academic research poster!

Among the things I’ve been working on this past July and August is the ARDrone 2.0, a very nifty off-the-shelf $300 quadrotor that can be controlled over WiFi. The first thing I set upon was reimplementing the ARDrone ROS driver for the second iteration of this drone, as the ardrone_brown ROS package wasn’t compatible with the 2.0. So started my arduous struggle of trying to get the 2.0 SDK to compile as a C++ ROS application, what with cascading makefiles and hundreds of preprocessor directives and undocumented code. Fortunately, with Andreas’ involvement in the project, we could make some sense of the SDK and got it to compile as a ROS driver. It turned out to be a complete rewrite, and we managed to expose some additional functionality of the SDK, namely a mode change for stable hover, where the drone uses its downward-facing camera to perform optical-flow-based stabilization, and publishing the IMU navdata at the native frequency (~200 Hz). To demonstrate the former, we formulated a novel test we like to call The Kick Test

The first thing I attempted to implement on top of the driver was getting the ARDrone to follow trajectories given as sets of waypoints. The major problem is that the drone has no way to localise itself – the only feedback we have are the IMU’s readings, and hence the only way to track movement is by integration, which means unavoidable drift errors. An alternative could have been using the downward-facing camera to perform optical flow, but since we don’t have access to the onboard board, the inherent latencies involved in streaming the video to the base station and sending modified controls back would have been prohibitively expensive. Fortunately, the planner that sends the trajectories replans after every delta movement, so as long as the drone doesn’t drift too much from the intended path, the error is accommodated in the next plan. Towards this end I implemented a PD controller that listens to tf data spewed out by a tracker node. The tracker node listens to the navdata stream and continuously updates the drone frame’s pose wrt the origin frame. This is required since the command velocities sent to the drone are always relative to the drone’s frame of reference. After a few days of endless tweaking of the gains I finally managed to get reasonable waypoint-following behaviour. You can get a short glance at the tracker tf data in RViz in this video


Here, origin is the starting location of the drone, drone_int is the current integrated location of the drone based on the IMU readings, and lookahead distance is the point at the specified look-ahead distance in the direction of the waypoint.
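For reference, the integration the tracker node does boils down to something like this sketch (my paraphrase, not the actual node; I’m assuming navdata-style fields with vx/vy in mm/s in the drone frame and yaw in degrees):

import math

class DeadReckoner(object):
    def __init__(self):
        self.x = 0.0
        self.y = 0.0
        self.last_t = None

    def update(self, vx_mm, vy_mm, yaw_deg, t):
        if self.last_t is not None:
            dt = t - self.last_t
            yaw = math.radians(yaw_deg)
            # rotate the body-frame velocities into the origin frame before integrating
            vx = (vx_mm * math.cos(yaw) - vy_mm * math.sin(yaw)) / 1000.0
            vy = (vx_mm * math.sin(yaw) + vy_mm * math.cos(yaw)) / 1000.0
            self.x += vx * dt
            self.y += vy * dt
        self.last_t = t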

However, it was pretty jerky; whenever the drone reached a waypoint, it would suddenly speed up due to the significant change in error. Jerks are, of course, very bad, since each jerk introduces a significant error peak as we integrate the velocities to obtain position. So, to mitigate this issue I implemented Drew’s suggested ‘dangling carrot’ approach – with the error term restricted to a maximum look-ahead distance along the direction to the closest waypoint. This finally resulted in the drone following trajectories pretty smoothly.
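In sketch form, the ‘dangling carrot’ amounts to clamping the error vector before it reaches the PD controller (the look-ahead value here is just a placeholder):

import numpy as np

def carrot_error(current_pos, waypoint, max_lookahead=0.5):
    """Clamp the position error to at most max_lookahead metres along the
    direction to the waypoint, so the controller never sees a huge step."""
    error = np.asarray(waypoint, dtype=float) - np.asarray(current_pos, dtype=float)
    dist = np.linalg.norm(error)
    if dist > max_lookahead:
        error = error / dist * max_lookahead
    return error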

The next thing I’m working on is providing a more intuitive set of control inputs to the classifier. The intuition is that a bird always heads to the most open region in an image, rather than avoiding oncoming obstacles by rolling right or left. Towards this end I implemented a new controller that allows the drone to be controlled via a mouse. The idea behind the visual controller is to get the drone to move along rays in the image – thus mouse clicks correspond to a pair of yaw and pitch angles. Since the drone can’t move forward while pitching upwards (it would then just move backwards), my controller ramps the altitude along with the forward velocity, in proportion to the tangent of the pitch angle. The next step is to interface this new controller with the classifier, see what kind of results I obtain, and possibly get a paper out (yay!).
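A toy sketch of the click-to-ray mapping I have in mind; the pinhole intrinsics (fx, fy, cx, cy) are assumed values, not the drone’s actual calibration:

import math

fx, fy, cx, cy = 560.0, 560.0, 320.0, 180.0   # assumed pinhole intrinsics

def click_to_command(u, v, forward_speed=0.5):
    """Map a pixel click (u, v) to yaw and pitch angles along the image ray,
    and couple the climb rate to the forward speed via tan(pitch)."""
    yaw = math.atan2(u - cx, fx)        # positive to the right of the image centre
    pitch = math.atan2(cy - v, fy)      # positive above the image centre
    climb_rate = forward_speed * math.tan(pitch)
    return yaw, pitch, climb_rate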

I also beefed up the wxPython GUI that I worked on earlier in the summer to dynamically parse parameters from launch files (and from launch files included within those launch files) and present all the parameters in one unified dialog, to modify and save or load as required. This makes life much easier, since a lot of parameters often need to be changed manually in the launch files during flight tests.

A sample screenshot of the parameters loaded from a launch file. The user can only launch a node if the parameters are loaded.

Exciting times ahead!

RI, CMU (And so should you!)

A lot’s happened over this month. I graduated (yay!), apparently, but I don’t get the degree for about 5 years (shakes fist at DU :-/), and I arrived at the RI in the first week of June. Also, I couldn’t attend the Jed-i finals, but found a good Samaritan to present it in my stead. So here’s to you, Sanjith, for helping me at least present my (reasonably incomplete) project at the competition! There was also a little heartburn – I got selected for the national interview of the NS Scholarship for $30K, but they were adamant that I be present in person in Mumbai on the 15th. No amount of talking or convincing helped there, and so I had to let it go. Sad.

Anyway, out here, for the first two and a half weeks I dabbled in the untidy art of making a comprehensive wxPython GUI for starting all the ROS nodes that need to be fired up while testing the MAV. Along the way, I figured out a lot about Python and about threading issues. One of the most annoying issues in wxPython is that it requires all changes to UI members to be made from its own (main) thread. So, for instance, in my ROS image callbacks I could not simply lock the running threads and assign the buffer data of the incoming OpenCV image to the StaticBitmap – doing so let monsters run through the code, what with xlib errors and random crashes. After a lot of searching, I finally realised that ALL members should be driven ONLY through events, since the event methods are always called by wxPython’s main UI thread. Another interesting tidbit was to wx.CallAfter() the update, so that it fires only after the subscriber thread’s control over the method has left. It was also fun working with the subprocess module to launch the ROS node binaries, and using Python’s awesomeness to look up the binary files in the ROS package path and generate dynamic drop-down lists from them.
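The pattern that finally worked, in sketch form: the ROS callback never touches the widget directly, it only schedules an update on the wx main thread (the topic name and the rgb8 image assumption are mine):

import wx
import rospy
from sensor_msgs.msg import Image

class CameraPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.bitmap = wx.StaticBitmap(self)
        # assumes rospy.init_node() has been called elsewhere in the app
        rospy.Subscriber('/camera/image_raw', Image, self.image_cb)

    def image_cb(self, msg):
        # called from the rospy subscriber thread: hand off to the UI thread
        wx.CallAfter(self.update_bitmap, msg)

    def update_bitmap(self, msg):
        # runs on wxPython's main thread, so touching the widget is safe
        # (assumes an rgb8-encoded image of the right size)
        img = wx.ImageFromBuffer(msg.width, msg.height, bytearray(msg.data))
        self.bitmap.SetBitmap(wx.BitmapFromImage(img))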

A first (and last) look at the GUI. The toggle buttons that are red signify that the associated ROS node could not be fired up successfully, and the green ones are those that could. The code keeps track of the processes, so even if a node is killed externally, it is reflected in the GUI.

The next thing I worked on over the past few days was trying to determine the Maximum Likelihood Estimate of the density of trees, modelled as a Poisson process. The rationale behind it, as I understand it, is to get a better sense of how to plan paths beyond the currently visible trees in the 2D image obtained by the MAV.

The dataset provided is of the Flag Staff Hill at CMU and is a static pointcloud of roughly 2.5 million points. The intersection of the tree trunks with the parallel plane has been highlighted

So I had a dense pointcloud of Flag Staff Hill that I used as my dataset. After a simple ground plane estimation, I used a parallel plane 1.5 meters above this ground plane to segment out the tree trunks, whose arcs (remember that the rangefinder can’t look around a tree) I then fed to a simple contour detection process to produce a scatter plot of the trees. I then took this data and used approximate nearest neighbours to determine the distance to the farthest tree in a three-tree neighbourhood. I used this distance to characterize the area associated with each tree and its neighbours, and used those areas for binning to determine the MLE. The MLE was then simply the arithmetic mean of these binned areas. I also wrote up a nice little report on this in LaTeX.
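In rough code, my reading of that estimate looks like the sketch below; the input file and the exact binning are stand-ins for what the report actually does:

import numpy as np
from scipy.spatial import cKDTree

trees = np.loadtxt('tree_centers.txt')     # (N, 2) x/y of the detected trunks (placeholder file)
kd = cKDTree(trees)

# distance to the farthest tree in a three-tree neighbourhood
# (k=4 because the query point itself comes back as the nearest neighbour)
dists, _ = kd.query(trees, k=4)
d3 = dists[:, -1]

# area associated with each tree and its three neighbours
areas = np.pi * d3 ** 2

# for a homogeneous Poisson process, E[pi * d_k^2] = k / lambda,
# so the mean neighbourhood area gives an intensity estimate
lam_hat = 3.0 / areas.mean()
print('estimated tree density:', lam_hat)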

A plot of the recognized trees after the contour detection. Interestingly, notice the existence of trees in clumps. It was only later that I realized that the trees in the dataset were ghosted, with another tree offset by a small distance.

Not surprisingly, I’ve grown to like python a LOT! I could barely enjoy writing code in C++ for PCL after working in Python for just half a month. Possible convert? Maybe. I’ll just have to work more to find out its issues. /begin{rant} Also, for one of the best CS departments in the US, the SCS admin is really sluggish in granting access. /end{rant}

Get a grip on!

Following Rosen’s suggestion, I used the TaskManip interface to perform grasp planning, and here’s the first look at the ER4U grasping an object. Yes, you read that right, I’m getting agonizingly close!

The next thing I’m going to implement is introducing a couple of placeholders to define the start and end positions of the target.

Hopefully, I’ll also be able to add an IP-based target tracking plugin to later substitute for the placeholders in the simulation.