Category: Updates


Ewok Rampage and Fusion

After the ridiculously busy first semester I can finally get back to research. In the meantime I managed to work on a couple of interesting projects. In my ML project, which I might build on later this year, we investigated a hybrid reinforcement learning approach to non-linear control http://www.youtube.com/watch?v=dMhP8oSPiWM. For the Manipulation project, we devised a ranking-SVM approach to learn the preferability of actions for autonomously clearing a cluttered scene with a robotic (Barrett!) arm, as closely as a human operator would (Paper).

Back on BIRD, I’m trying to fuse visual odometry and accelerometer data to get reasonably accurate state information for tracking the quadrotor outdoors. The folks out at Zurich have done a pretty neat job of doing just that, and I set about trying to reimplement it. For visual odometry they use a modified version of PTAM that maintains a fixed maximum number of keyframes, which are used to generate a local map. Whenever the camera moves far enough out of the currently defined map to warrant the inclusion of an additional keyframe, the oldest keyframe is popped. This gives a very nice trade-off between robust pose estimation and fast execution on board. They have also released an EKF implementation that uses the pose estimate provided by PTAM as an external update to the filter.
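Just to illustrate the keyframe policy (this is only a toy sketch of the idea, not their code, and the window size is a made-up placeholder), the bounded local map behaves like a fixed-size queue:

from collections import deque

# Toy illustration of the bounded keyframe map: once the window is full,
# appending a new keyframe silently drops the oldest one.
MAX_KEYFRAMES = 15                      # placeholder; the real limit is a PTAM parameter
keyframes = deque(maxlen=MAX_KEYFRAMES)

def add_keyframe(pose, features):
    keyframes.append((pose, features))  # oldest keyframe is popped automatically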

As is always the case, for the life of me I haven’t been able to replicate their results with our 3DM-GX3 IMU. Turns out that their coordinate frames are inverted (gravity supposedly points up) and may be left-handed (:?). The net result is that although PTAM seems to give sensible updates, the state drifts away very quickly. However, since PTAM performs so admirably (it’s the first purely vision-based system in my experience that works SO well), I decided to test tracking using only PTAM in VICON, just to see how good it is quantitatively. But getting it to run on the Pandaboard led me (as usual) to a whole host of issues, the highlight being a bug in ROS’ serialization library for ARM-based processors X(

Anyway, on to the tips!
– Set use_ros_time to true for the camera1394 driver to avoid "Time is out of dual 32-bit range" errors.
– For the bus fault error (code -7), use the patch from https://github.com/ros/roscpp_core/pull/8
– To automatically set ROS_MASTER_URI to point to the pandaboard and ROS_HOSTNAME to the IP address assigned to the laptop whenever the laptop connects to the pandaboard’s WiFi hotspot, add this nifty little bash script to the end of ~/.bashrc
[Update 7/18 : Made this more robust, and takes into account connecting to flyingpanda using a directional antenna]

#Check if we're connected to the flyingpanda SSID
SSID=$(iwconfig wlan0 2>/dev/null | grep "ESSID:" | sed "s/.*ESSID:\"\(.*\)\".*/\1/")
case "$SSID" in
  flyingpanda)
    export ROS_HOSTNAME=$(ip addr show dev wlan0 | sed -e 's/^.*inet \([^ ]*\)\/.*$/\1/;t;d') #e.g. 192.168.2.84
    PANDA=$(echo $ROS_HOSTNAME | sed 's/\.[0-9]*$/.1/')
    export ROS_MASTER_URI=http://$PANDA:11311
    echo "Detected connection to $SSID, setting ROS_HOSTNAME to $ROS_HOSTNAME and master to $ROS_MASTER_URI" ;;
  *)
    #For visibility of nodes run from this system when I use an external ROS master
    export ROS_HOSTNAME=$(ip addr show dev eth0 | sed -e 's/^.*inet \([^ ]*\)\/.*$/\1/;t;d')
    if [[ $ROS_HOSTNAME == "" ]]
    then
      export ROS_HOSTNAME=127.0.0.1
    fi
    base_ip=$(echo $ROS_HOSTNAME | cut -d"." -f1-3)
    if [[ $base_ip == "192.168.2" ]]
    then
      export ROS_MASTER_URI=http://192.168.2.1:11311
    else
      export ROS_MASTER_URI=http://$ROS_HOSTNAME:11311
    fi
    echo "Setting ROS_HOSTNAME to $ROS_HOSTNAME and master to $ROS_MASTER_URI" ;;
esac

Essentially, I first check whether I’m connected to the flyingpanda SSID, then extract the IP address assigned to my laptop, and finally export the required variables. Fun stuff. There is a lot to be said about sed. I found this page pretty helpful – http://www.grymoire.com/Unix/Sed.html

boost is the secret of my energy!

One of the bigger advantages of Python is ostensibly its ability to wrap C/C++ code and expose its functionality to normal Python code. I was attempting to do exactly that recently, and got pretty frustrated at my inability to use boost::python to expose my trajectory-follower PID library to my Python code, so that I could write scripts for calibrating the ARDrone in the mocap lab (my tracking and plotting are done in Python).
Right off the blocks, when I tried defining the simplest boost::python class out of my PIDController class with only the constructor, I kept receiving a compilation error about the compiler not being able to find the copy constructor definition. This was odd, since I had clearly declared only my parameterized constructor, and I had no clue where the copy constructor was coming from. (The error message said something about boost::reference_ptr)
Turns out that by default boost::python expects your class to be copyable (i.e. it has a copy constructor and can be passed by value), and so you need to explicitly specify that the class is not meant to be copyable by adding a boost::noncopyable tag to your boost::python class_ template declaration.

This done, I then realized that my code used tf::Vector3 and tf::Matrix3x3, and I would need to build a wrapper to convert the Python-provided numpy arrays to these types. I also needed to convert the list of waypoints from a numpy array to a vector of floats, as required by the library. Also, since my trajectory-follower library creates a ROS node and needs to broadcast and receive tf data, I needed to initialise ROS using ros::init() before creating an object of my PIDController class, since the class declares a ros::NodeHandle, which requires ros::init() to have been called already.

Thus, I created a Wrapper class whose sole member is a pointer to a PIDController. Its constructor takes in the default arguments, calls ros::init(), and then allocates a new PIDController object to the pointer. Another function accepts a numpy array as a boost::python::numeric::array, figures out the length of the array using boost::python::extract, and then uses extract again to pull out the values and push them to the PIDController instance.

To be able to import the generated module, I had to add my library path to PYTHONPATH, and in my CMakeLists I added python to the rosbuild_link_boost definition. Note that the name of the module and the generated library file should match exactly. Also, in order to get boost::python to understand that I was sending in a numpy ndarray, I had to declare so in my BOOST_PYTHON_MODULE.

BOOST_PYTHON_MODULE(libfollow_trajectory)
{
    boost::python::numeric::array::set_module_and_type("numpy", "ndarray");
    class_<PidControllerWrapper, boost::noncopyable>("PidController",
        init<float, float, float, float, float, float, float, float>());
}
Whew, and that’s it! It sure was longer and more complicated than I expected, but it works like a charm!
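For reference, the Python side then looks roughly like this – a minimal sketch, where the gain values and the waypoint-setting method name are placeholders standing in for my actual interface:

import numpy as np
import libfollow_trajectory as ft   # module name must match the generated library file

# eight float gains, matching the init<float, ...> signature exposed above
controller = ft.PidController(1.0, 0.0, 0.1, 1.0, 0.0, 0.1, 0.5, 0.5)

# waypoints as a numpy ndarray; the wrapper extracts the values and pushes
# them to the PIDController instance (method name is hypothetical)
waypoints = np.array([[0.0, 0.0, 1.0],
                      [1.0, 0.0, 1.0],
                      [1.0, 1.0, 1.0]])
controller.set_waypoints(waypoints)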

Also, working in mocap is fun! The ARDrone, however, is quite annoying. It drifts, ever so much. Here is an example of a series of plots I computed while the controller tried to follow a parabolic curve. Funnily enough, you can see that the drone actually always drifts to the left of where it thinks it is. One of the reasons I conducted this test was to determine whether this behaviour is repeatable, and if so, to take this drift into account and use an iterative learning controller to figure out what new trajectory to provide in order to get the drone to follow the intended path (which, in this case, would be some trajectory to the right of the provided one).
[Plot: tracked vs. intended parabolic trajectory in mocap, showing the leftward drift]

Also, we are getting an arducopter! Can’t wait to get my hands on it! :D

The slashdot experience

So we managed to get the BIRD MURI video and page ‘slashdotted’ last week. I set up Google Analytics and got some fun stats that Drew thought would be interesting to disseminate.
tl;dr

  • Web page and youtube got an almost equal number of hits (~3K)
  • Web page: 93% left before 10 seconds. Video: average view retention 1:24.
  • Retention through the initial systems explanation and the first 43 seconds of the video is > 90%.
  • Conclusions –

a. People prefer youtube videos to web page views

b. Video is much more effective at disseminating information and engaging a prospective audience.

c. Assume a much more global viewership than you normally would

d. 1:30 seems like a good length for a demo video.

In case you haven’t seen the video, here it is

Also, have a look at the slashdot post. Some of the comments are fun :)

Raw stats

Since Thursday, 3,039 unique people visited the site, and only 54.62% were from the US. The city with the most visitors was London (1.9%). The next biggest viewerships were from Canada (8.9%), the UK (6.3%) and Australia (5%).
Browsers – Chrome (45.9%), Firefox(34%), Safari(!)(10%)
OS – Windows (55%), Mac (19%), Linux (18.2%), iOS(4.4%)
The flip side: 93% of visitors left before 10 seconds.
Also, I checked the youtube stats, which are much better.
We got 3,234 views, with an average view duration of 1:24 (72% of the video duration). (The flight footage ends at 1:32, at 80% absolute retention.)
Also, retention through the initial systems explanation and the first 43 seconds of the video is > 90%,
which probably implies that the video is much more effective at disseminating information and engaging a prospective audience.

Drone on!

Updates! So, among other things, I’m staying back on the BIRD project till December (the folks at UPenn graciously accepted my request to defer my admission by a semester) to do more fancy ML magic with quadrotors! :D (I promise loads of videos.) The RISS program ended in the meantime, and I presented my very second academic research poster!

Among the things I’ve been working on this past July and August is the ARDrone 2.0, a very nifty off-the-shelf $300 quadrotor that can be controlled over WiFi. The first thing I set upon was reimplementing the ARDrone ROS driver for the second iteration of this drone, as the ardrone_brown ROS package wasn’t compatible with the 2.0. So started my arduous struggle of trying to get the 2.0 SDK to compile as a C++ ROS application, what with cascading makefiles, hundreds of preprocessor directives and undocumented code. Fortunately, with Andreas’ involvement in the project, we could make some sense of the SDK and got it to compile as a ROS driver. It turned out to be a complete rewrite, and we managed to expose some additional functionality of the SDK, namely a mode change for stable hover, where the drone uses its downward-facing camera to perform optical-flow-based stabilization, and publishing the IMU navdata at the native frequency (~200 Hz). To demonstrate the former we formulated a novel test we like to call The Kick Test.

The first thing I attempted to implement was getting the ARDrone to follow trajectories given as sets of waypoints. The major problem was that the drone has no way to localise itself – the only feedback we have are the IMU’s readings, and hence the only way to track movement is by integration, which means unavoidable drift errors. An alternative could have been using the downward-facing camera to perform optical flow, but since we don’t have access to the onboard board, the latencies involved in streaming the video to the base station and sending modified controls back would have been prohibitive. Fortunately, the planner that sends the trajectories replans after every delta movement, so as long as the drone doesn’t drift too much from the intended path, the error is accommodated in the next plan. Towards this end I implemented a PD controller that listens to tf data spewed by a tracker node. The tracker node listens to the navdata stream and continuously updates the pose of the drone’s frame with respect to the origin frame. This is required since the command velocities sent to the drone are always relative to the drone’s frame of reference. After a few days of endless tweaking of the gains I finally managed to get reasonable waypoint-following behaviour. You can get a short glance at the tracker tf data in RViz in this video


Here, origin is the starting location of the drone, drone_int is the current integrated location of the drone based on the IMU readings, and lookahead distance is the point at the specified look-ahead distance in the direction of the waypoint.
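The tracker itself boils down to dead reckoning: rotate the body-frame velocities from the navdata into the origin frame, integrate, and broadcast the result as drone_int over tf. Here’s a stripped-down sketch, assuming a navdata message with body-frame velocities vx, vy (mm/s), altitude altd (mm) and yaw rotZ (degrees) – the exact message type and field names depend on the driver version:

#!/usr/bin/env python
# Minimal sketch of the tracker node: integrate body-frame velocities from
# navdata into a pose and broadcast it as the drone_int frame wrt origin.
# The message type and field names (Navdata, vx, vy, rotZ, altd) are assumptions.
import math
import rospy
import tf
from ardrone_autonomy.msg import Navdata

class Tracker(object):
    def __init__(self):
        self.x = self.y = 0.0
        self.yaw = 0.0
        self.last_time = None
        self.br = tf.TransformBroadcaster()
        rospy.Subscriber('ardrone/navdata', Navdata, self.navdata_cb)

    def navdata_cb(self, msg):
        now = rospy.Time.now()
        self.yaw = math.radians(msg.rotZ)          # heading straight from the IMU
        if self.last_time is not None:
            dt = (now - self.last_time).to_sec()
            # rotate body-frame velocities (mm/s) into the origin frame and integrate
            vx, vy = msg.vx / 1000.0, msg.vy / 1000.0
            self.x += (vx * math.cos(self.yaw) - vy * math.sin(self.yaw)) * dt
            self.y += (vx * math.sin(self.yaw) + vy * math.cos(self.yaw)) * dt
        self.last_time = now
        self.br.sendTransform((self.x, self.y, msg.altd / 1000.0),
                              tf.transformations.quaternion_from_euler(0.0, 0.0, self.yaw),
                              now, 'drone_int', 'origin')

if __name__ == '__main__':
    rospy.init_node('drone_tracker')
    Tracker()
    rospy.spin()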

However, it was pretty jerky; whenever the drone reached a waypoint, it would suddenly speed up due to the significant change in error. Jerks are, of course, very bad, since each jerk introduces a significant error peak as we integrate the velocities to obtain position. So, to mitigate this issue I implemented Drew’s suggested ‘dangling carrot’ approach – with the error term restricted to a maximum look-ahead distance along the direction to the closest waypoint. This finally resulted in the drone following trajectories pretty smoothly.
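The carrot itself is just a clamped error term – something along these lines (a small sketch, using numpy for brevity):

import numpy as np

def carrot_target(position, waypoint, lookahead):
    """Clamp the error to a 'dangling carrot' at most `lookahead` metres away,
    along the direction from the current position to the waypoint."""
    position, waypoint = np.asarray(position), np.asarray(waypoint)
    error = waypoint - position
    dist = np.linalg.norm(error)
    if dist <= lookahead:
        return waypoint                       # close enough: aim at the waypoint itself
    return position + error * (lookahead / dist)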

The next thing I’m working on now is providing a more intuitive sense of control inputs to the classifier. The intuition is that a bird always heads towards the most open region in an image, rather than avoiding oncoming obstacles by rolling right or left. Towards this end I implemented a new controller that allows the drone to be controlled via a mouse. The idea behind the visual controller is to get the drone to move along rays in the image – thus mouse clicks correspond to a pair of yaw and pitch angles. Since the drone can’t move forward while pitching upwards (it would then just move backwards), my controller ramps the altitude at a rate proportional to the forward velocity times the tangent of the pitch angle. The next step is to interface this new controller with the classifier, see what kind of results I obtain, and possibly get a paper out (yay!).
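Concretely, the mapping from a mouse click to a command looks something like this – a sketch assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy), which is my assumption here rather than the exact code:

import math

def click_to_command(u, v, fx, fy, cx, cy, forward_speed):
    """Map a mouse click (u, v) in the image to a ray via a pinhole model,
    then command forward velocity plus a climb rate proportional to
    forward_speed * tan(pitch), so the drone moves along the clicked ray."""
    yaw = math.atan2(u - cx, fx)          # positive for clicks right of centre
    pitch = math.atan2(cy - v, fy)        # positive for clicks above centre
    vx = forward_speed                    # forward component of the command
    vz = forward_speed * math.tan(pitch)  # ramp altitude along the ray
    return yaw, vx, vz                    # yaw handed to the yaw controller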

I also beefed up the wxPython GUI that I worked on earlier in the summer to dynamically parse parameters from launch files (and from launch files included within those launch files) and present all the parameters in one unified dialog, to modify and save or load as required. This makes life much easier, since a lot of parameters often need to be changed manually in the launch files during flight tests.
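The parameter scraping itself is straightforward XML walking – roughly like the sketch below, which ignores roslaunch substitution args like $(find pkg) that the real GUI has to deal with:

import xml.etree.ElementTree as ET

# Sketch of the launch-file parameter scraping: collect every <param> in a
# launch file and recurse into <include>d launch files. Substitution args
# are skipped here rather than resolved.
def collect_params(launch_path, params=None):
    params = {} if params is None else params
    tree = ET.parse(launch_path)
    for elem in tree.iter():
        if elem.tag == 'param':
            params[elem.get('name')] = elem.get('value')
        elif elem.tag == 'include':
            child = elem.get('file')
            if child and '$(' not in child:   # skip unresolved substitutions
                collect_params(child, params)
    return params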

A sample screenshot of the parameters loaded from a launch file. The user can only launch a node if the parameters are loaded.

Exciting times ahead!

RI, CMU (And so should you!)

A lot’s happened over this month. I graduated (yay!), apparently, but I don’t get the degree for about 5 years. (Shakes fist at DU. :-/) I arrived at the RI in the first week of June. Also, I couldn’t attend the Jed-i finals, but found a good Samaritan to present it in my stead. So here’s to you, Sanjith, for helping me at least present my (reasonably incomplete) project at the competition! There was also a little heartburn – I got selected for the national interview of the NS Scholarship for $30K, but they were adamant that I had to be present in person in Mumbai on the 15th. No amount of talking or convincing helped there, and so I had to let it go. Sad.

Anyway, out here the first two and a half weeks I dabbled in the untidy art of making a comprehensive wxPython GUI for starting all the ROS nodes that need to be fired up while testing the MAV. Along the way, I figured out a lot about Python and threading issues. One of the most annoying issues in wxPython is that it requires all changes to UI members to be made from its own (main) thread. So, for instance, in my ROS image callbacks I could not simply lock the running threads and assign the buffer data of the incoming OpenCV image to the StaticBitmap – doing so let monsters run through the code, what with xlib errors and random crashes. After a lot of searching, I finally realised that ALL members should be driven ONLY through events, since the event handlers are always called by wxPython’s main UI thread. Another interesting tidbit was to use wx.CallAfter(), so that the update fires only after the subscriber thread’s control over the method has left. It was also fun working with subprocess calls to launch the ROS node binaries and using Python’s awesomeness to look up the binary files in the ROS package path and generate dynamic drop-down lists from them.
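The pattern that finally worked looks roughly like this – a minimal sketch with the classic wxPython API, and a deliberately simplified image-to-bitmap conversion:

import wx

# The ROS image callback runs in a subscriber thread, so it never touches the
# wx widgets directly; it only schedules an update via wx.CallAfter, and the
# handler then runs on wxPython's main UI thread.
class CameraPanel(wx.Panel):
    def __init__(self, parent):
        super(CameraPanel, self).__init__(parent)
        self.bitmap = wx.StaticBitmap(self)

    def ros_image_callback(self, img_msg):
        # called from the ROS subscriber thread -- do NOT touch the UI here
        wx.CallAfter(self.update_bitmap, img_msg)

    def update_bitmap(self, img_msg):
        # runs on the main UI thread; assumes an RGB8 image message
        image = wx.ImageFromBuffer(img_msg.width, img_msg.height, img_msg.data)
        self.bitmap.SetBitmap(wx.BitmapFromImage(image))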

A first (and last) look at the GUI. The toggle buttons that are red signify that the associated ROS node could not be fired successfully, and the green ones are those that could. The code keeps track of the processes, so even if a node is killed externally, it is reflected in the GUI.

The next thing I worked on over the past few days was trying to determine the maximum likelihood estimate (MLE) of the distribution of trees, modelled as a Poisson process. The rationale behind it, as I understand it, is to have a better sense of planning paths beyond the currently visible trees in the 2D image obtained by the MAV.

The dataset provided is of the Flag Staff Hill at CMU and is a static point cloud of roughly 2.5 million points. The intersection of the tree trunks with the parallel plane has been highlighted.

So I had a dense point cloud of Flag Staff Hill that I used as my dataset. After a simple ground-plane estimation, I used a parallel plane 1.5 metres above the ground plane to segment out tree trunks, whose arcs (remember that the rangefinder can’t look around a tree) I then fed to a simple contour-detection process to produce a scatter plot of the trees. I then used approximate nearest neighbours to determine the distance to the farthest tree in a three-tree neighbourhood, used that distance to characterize the area associated with each tree and its neighbours, and binned these areas to determine the MLE – which is then simply their arithmetic mean. I also wrote up a nice little report in LaTeX on this.
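In code, the neighbourhood-area step is essentially this – a sketch using scipy’s kd-tree in place of the approximate nearest neighbours library I actually used:

import numpy as np
from scipy.spatial import cKDTree

# For each detected tree, find the distance to the farthest of its three
# nearest neighbours, treat the disc of that radius as the area "owned" by
# that neighbourhood, and take the mean. This mirrors the procedure described
# above; it is not a general-purpose Poisson-process estimator.
def mean_neighbourhood_area(tree_xy):
    tree_xy = np.asarray(tree_xy)          # N x 2 scatter of tree centres
    kdtree = cKDTree(tree_xy)
    # k=4 because the query returns the point itself as the first neighbour
    dists, _ = kdtree.query(tree_xy, k=4)
    radii = dists[:, -1]                   # distance to the 3rd true neighbour
    areas = np.pi * radii ** 2
    return areas.mean()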

A plot of the recognized trees after contour detection. Interestingly, notice the trees occurring in clumps. It was only later that I realized that the trees in the dataset were ghosted, with another tree offset by a small distance.

Not surprisingly, I’ve grown to like Python a LOT! I could barely enjoy writing code in C++ for PCL after working in Python for just half a month. Possible convert? Maybe. I’ll just have to work more to find out its issues. \begin{rant} Also, for one of the best CS departments in the US, the SCS admin is really sluggish in granting access. \end{rant}

Get a grip on!

Following Rosen’s suggestion, I used the TaskManip interface to perform grasp planning, and here’s the first look at the ER4U grasping an object. Yes, you read that right, I’m getting agonizingly close!
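For the curious, getting hold of the TaskManip interface in openravepy looks roughly like this – just a fragment showing the setup and the finger-closing call, not the full grasp-planning loop, and with the scene file and target body names as placeholders:

from openravepy import Environment, interfaces, databases

env = Environment()
env.Load('scorbot_scene.env.xml')        # placeholder scene file
robot = env.GetRobots()[0]
target = env.GetKinBody('target_object') # placeholder body name

# precompute a grasp set for the target (loads from cache if already built)
gmodel = databases.grasping.GraspingModel(robot, target)
if not gmodel.load():
    gmodel.autogenerate()

taskmanip = interfaces.TaskManipulation(robot)
taskmanip.CloseFingers()                 # close the gripper around the target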

The next thing I’m going to implement is introducing a couple of placeholders to define the start and end positions of the target.

Hopefully, I’ll also be able to add an IP-based target-tracking plugin to later substitute for the placeholders in the simulation.

aar-arm

Well, as the title suggests, work on the senior project has been progressing at a lazy pace, but a few developments have spurred me back into action. First, my project has been selected as one of the final 20 projects to compete at the Jed-I challenge. Second, the official project submission deadline is approaching. Fast.

So, developments. I’ve started working in a more structured manner – I’ve set up a git repo and a proper debuggable Python environment within Eclipse. The last post showed the SCORBOT ER-4U robot building the ikfast TranslationDirection5D database for itself. This ik solver is used because the robot has only 5 degrees of freedom, which severely curtails the available 6D space for effective manipulation; hence the solver works from a given destination point and a direction from which to approach it.
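Generating (or loading) that database through openravepy looks something like this – the robot file name is a placeholder for my SCORBOT model:

from openravepy import Environment, IkParameterization, databases

env = Environment()
env.Load('scorbot_er4u.robot.xml')       # placeholder robot file
robot = env.GetRobots()[0]

# build (or load) the analytic TranslationDirection5D ikfast solver for the 5-DoF arm
ikmodel = databases.inversekinematics.InverseKinematicsModel(
    robot, iktype=IkParameterization.Type.TranslationDirection5D)
if not ikmodel.load():
    ikmodel.autogenerate()               # this is the slow, one-time step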

After playing around a bit with the python files I observed that very few grasps managed to return valid ik solutions. Even deleting the generated ikfast files and rebuilding the database didn’t help. What puzzled me the most was that even a very small displacement failed to return a solution.

On setting openrave to verbose mode it turned out that the wrist joint was colliding with the gripper:

Self collision: (ER-4U:wrist1)x(ER-4U:gripperLeft) contacts=0
[odecollision.h:687] selfcol ER-4U, Links wrist1 gripperLeft are colliding

So the jaws are colliding whenever I use the cube’s coordinates as my target point; for an arbitrary valid target point, the arm moves correctly and along the specified direction.
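The quickest way I found to poke at this is to turn up the debug level and query self-collision directly for a given configuration – a sketch, with the robot file name as a placeholder:

from openravepy import Environment, RaveSetDebugLevel, DebugLevel

RaveSetDebugLevel(DebugLevel.Verbose)    # prints the colliding link pairs, as above

env = Environment()
env.Load('scorbot_er4u.robot.xml')       # placeholder robot file
robot = env.GetRobots()[0]

robot.SetDOFValues([0.0] * robot.GetDOF())   # or any configuration you want to test
if robot.CheckSelfCollision():
    print('self collision at this configuration')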

I also need to adjust my CAD model, as the wrist attachment’s top is interfering with the lower arm, and the grippers are actually more easily modelled as revolute joints, not sliding joints as I had initially thought. I’ll also increase the initial gripper opening, as suggested by Rosen here.

UPDATE: It turned out that the collision was happening because I had a virtual wrist that rolled, on which the wrist attachment pitched up and down. What this meant was that whenever the virtual wrist (wristZ) rolled, the pitching axis of the wrist attachment also changed, so that any pitching movement would lead to the wrist obliquely colliding with the forearm. To amend this, I split the wrist attachment into two parts, with wristX pitching up and down first, and wristZ then rolling about that axis (the grippers are attached to wristZ). Here’s a screenshot of the new model in action. This has also eliminated the occasional wobbling of the arm during a move.

The updated CAD model. Note that the grippers have become revolute joints, and the wrist attachment has been divided into two subparts – wristX is the projecting portion behind the differential gears and wristZ the part in front, to which the grippers are attached. Also note the small 1 mm clearance I introduced between the differential gears.

After Effects

‘Twas a balmy April
a Greek wedding
and its After Effects

All development work came to a screeching halt this April as my cousin got married. Amidst the hoopla and hype of an inter-national wedding, I got to dabble with Adobe After Effects to make a wedding presentation for the beautiful couple. I had been meaning to jump in for the longest time, but the intimidating UI (and more importantly, lack of time) held me back.

So after a day and a half’s worth of online tutorials and stumbling through the dark, here’s the final result!

After Effects definitely lived up to its hype and my expectations. There are loads of cool stuff in this presentation that I worked on – The intro image has been layered off to provide a 3D perspective (note how crudely I’ve stamp tooled the background), the photos are all staggered in 3D space and there’s a particle field in suspended growth which looks really nice with the motion blur. Manipulating the camera movement was a real pain, and loads of painstaking keyframing and readjusting the photos was required to get it to work (which is why at some places I was forced to give up). Shallow depth of field helps focus the attention well, and a random ‘camera shake’ provides a sense of dynamism to the presentation.

p.s. the title was shamelessly copied from a tutorial.

In other news, work on the senior project hasn’t really progressed, as I finally had to start attending college (shakes fist at the final semester), since there are only a few weeks of classes remaining. I also wound up my internship at Hi Tech as I couldn’t really find any more time to go there. Pity that I couldn’t complete the work on the full-scale setup that they now have :( (Shakes fist at the Auto Expo and its consequent delays)

Oh, and I’m heading to the RISS program again! Yay! :D I’ll be working under Prof. Drew Bagnell on the BIRD project, where the objective is to use ML magic to help MAVs navigate through densely cluttered environments like forests using only onboard vision. *Very* interesting :)

Dis-arm-ament

Like I probably mentioned earlier, for my senior project I’ve decided to work on visual servoing of a 5+1 DoF arm (it’s a SCORBOT ER-4u – found one gathering dust in the CAM lab at college). The objective was to use only a monocular camera mounted on top of the gripper to servo the arm, but due to time constraints I shall be adding an overhead camera to obtain poses more accurately. This is a long overdue post, so brace yourself :P

One of the first things I decided on was to utilize the OpenRave framework – I had read a lot about it, and was even recommended to use it by Pras. The most nifty feature of OpenRave is its ikfast module, which is a ‘Robot Kinematics Compiler’ that analytically solves robot inverse kinematics equations and generates optimized C++ files for later use that are highly reliable and fast. The performance of this very promising module seems phenomenal – ~5 ms evaluation time is what the official docs say.

So, then, the next step was implementing the ER-4U bot within OpenRave’s framework. I couldn’t really find CAD models that were useful (EDIT: I did find one on the SCORBOT Yahoo groups page, but it didn’t seem clean enough), and since I also wanted to model the robot myself, I created a CAD model in SolidWorks. Now, OpenRave can recognize COLLADA files, and I was happy to see a SolidWorks plugin that did export to COLLADA, but the plugin’s format version (1.4) was older than the one OpenRave supports (1.5). Although I could get the model to render in OpenRave by fiddling with the header, the issue was that OpenRave could not understand what the links and the joints were, so I couldn’t, say, rotate the links about their joints.

A plain COLLADA 1.4 import into OpenRave. Note that there is no relationship defined between the links - If I start the physics engine, everything starts flying off.

Turns out that to get around issues like this, OpenRave allows you to define robots (essentially kinematic chains, or KinBodys) where each sub-part of the assembly is defined as a link. So, taking a cue from this blog, I set about splitting the SCORBOT CAD model into separate WRLs and importing their geometries in my XML file. The WRLs then simply act as the geometry meshes used for collision detection and trajectory planning, while the underlying math model is derived from the link info that is added to the XML file.

On my first attempt at getting this working, I managed to get the joints in place and the parts imported into OpenRave as separate entities, but with a very glaring error – the references between links that I used were in arbitrary orientations in SolidWorks. When OpenRave imports the WRLs, it just adds them to the scene in their native coordinates – giving the very staggered view seen in the screenshot. Looking at other sample robots, it seems I’ll have to reorient all my sub-assemblies to the same local coordinates (like all pointing in the positive X direction) and take measurements between the links with the entire arm, say, horizontal or vertical.

As can be seen, because the relative distances I set in the XML file were according to a specific viewpoint in my overall assembly, and because OpenRave renders the sub assemblies in their local coordinates, all my parts are located with the wrong offsets. This will be remedied by taking measurements from an assembly wherein all the links are stretched out in one direction.

For the SCORBOT ER-4u, Intellitek has done away with the traditional serial interface in favour of a proprietary USB controller. This severely restricts the use of third-party applications to access the robot, as unlike with a serial port, commands cannot simply be sent via a terminal. Fortunately, some developers have been able to reverse-assemble the DLL file and trace the basic method calls (this file). Compiling against this dynamic library enables access to high-level functions of the robot arm as well as access to the servo PWMs. However, this restricts the implementation to the Windows environment. I’ll be looking at trying to run it under WINE on Linux.

As far as interfacing with OpenRave is concerned, OpenRave provides a C++ interface, and hence it is a trivial task to connect the methods of the USBC DLL with OpenRave. However, I haven’t implemented that yet.

UPDATE: Things have been progressing – head over to this OpenRave users list thread that I started. Rosen Diankov is awesome.

UPDATE 2: I’ve finally got it to start finding the IK solutions! Here’s a fun video of it in action :)

Peek a Boo!

My development machine here at Hi Tech being, well, low end, had been driving me crazy. Eclipse used to take eons to build (even by eclipse’s notorious standards), and rviz used to choke and sputter on my point clouds. Highly annoying.

My incessant pleas were soon enough heard and I was given access to a mean development rig, resplendent with an SSD and all the fireworks. The caveat being that I could only access it over the network – which meant a simple ssh -X command to access the bountiful resources. Not an issue, right? Wrong.

Turns out that ssh -X only routes drawing calls to the client, which means that all the rendering is done locally. So, although I could avail of the (marginally) better build times (remember, it’s eclipse, SSD notwithstanding. Immovable leviathan wins), I was back to square one.

My first tubelight moment was to try using VNC, and after some fiddling around I managed to set it up as described in my previous post. Looked great, and I could finally access the desktop remotely, with the screen framebuffer being sent across my low-latency network. I happily ran Gazebo, only to see –
Missing GLX on Screen :0

As with all things Linux, it really couldn’t have been such an obvious solution. This error occurred because Xvnc does not handle OpenGL calls to the screen. At this point I suddenly remembered my long tussle with Bumblebee on my laptop, and looked up VirtualGL, and sure enough, it seemed to be the panacea. So, I downloaded the VirtualGL binaries from their SourceForge page and followed the user guide to install VirtualGL.

Minor modifications:
1. From 11.10 (Oneiric) onwards, the default display manager is lightdm, so instead of stopping gdm, run service lightdm stop.
2. After doing this, also rmmod nvidia if you have the nvidia drivers installed. Unload the corresponding driver for ATI cards (fglrx?).
So, having installed the VirtualGL binaries on both the server (remote machine) and the client (your terminal), and after running vglserver_config on the remote machine, re-login to the remote machine using vglconnect [username]@[ip address], and voila! Everything will be set up by vglconnect. All you need to do then to execute any GL application is to prefix vglrun to the command, e.g. I use vglrun rosrun rviz rviz.

So what goes on behind the scenes is that VirtualGL loads a helper library whenever you call vglrun. This nifty little hook sits in memory and redirects all GL calls to and from the VGL server, which renders them into the required buffer. So the application doesn’t even get to know what’s really going on. Neat, huh?
As a parting shot, here’s glxgears, running in its full ‘glory’.

Screenshot of VGL working over the network