
Sound the Alarms!

Finally done with the drudgery of undergraduate exams!

And, by a happy coincidence, I managed to get the speakers on my XPS 17 L702x laptop working on Ubuntu 10.04.

So, in case you’re in the same boat as I am, here’s how to go about it:
a. Enable the backports repository in Software Packages
b. Determine your kernel version using uname -a
c. Install the appropriate linux-backports-modules-alsa-$(uname -r) (rough commands below)
d. Enjoy the awesomeness! :)
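In case it helps, here’s roughly what steps a–c boil down to on the command line. The exact backports package name varies with your kernel, so treat this as a sketch and check with apt-cache first:

# after enabling the lucid-backports repository in Software Sources:
sudo apt-get update
# list the available ALSA backport packages and pick the one matching your kernel
apt-cache search linux-backports-modules-alsa
sudo apt-get install linux-backports-modules-alsa-$(uname -r)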

How this works is that reasonably current development versions of libraries like ALSA are, as the name suggests, back-ported to run on 10.04. Similarly, there’s a compat-wireless backport too, which might solve any wireless issues you have.

Unfortunately, I can’t seem to find a way to get full HD resolution working. Going through the Xorg log, the EDID information is read, but X can’t parse it.

In other news, I really have to rush development on the Senior project :S

Also, to be able to push to GitHub through my college firewall, I used an SSL port (if you can access https sites, you’re good to go) that the good folks at GitHub have opened up, thusly (I always wanted to use this word!):
a. ~/.ssh/config
Host github
User git
Port 443
Hostname ssh.github.com
ServerAliveInterval 10
IdentityFile /home/kshaurya/.ssh/id_rsa

b. Changed the project/.git/config hostname from github.com to the new host alias, github (or do the equivalent from the command line, as below)
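For reference, the same change can be made with git itself – the user/repo below are placeholders for your own:

# point the existing remote at the 'github' host alias from ~/.ssh/config
git remote set-url origin github:username/project.git
# quick sanity check – this should authenticate over port 443
ssh -T github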


aar-arm

Well, as the title suggests, work on the Senior project has been progressing at a lazy pace, but a few developments have spurred me back into action. First, my project has been selected as one of the final 20 projects to compete at the Jed-I challenge. Second, the official project submission deadline is approaching. Fast.

So, developments. I’ve started working in a more structured manner – I’ve set up a git repo and a proper debuggable Python environment within Eclipse. The last post showed the SCORBOT ER-4U robot building the ikfast TranslationDirection5D database for itself. This IK solver is used since the robot has only 5 degrees of freedom, which severely curtails the reachable 6D pose space for effective manipulation; it therefore works from a given destination point and a direction from which to approach it.
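For the record, the database build in the last post was kicked off with OpenRave’s inversekinematics database generator – something along these lines, where the robot file name is my own (yours will differ):

# build (or load) the TranslationDirection5D ikfast solver for the arm
openrave.py --database inversekinematics --robot=scorbot-er4u.robot.xml --iktype=translationdirection5d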

After playing around a bit with the python files I observed that very few grasps managed to return valid ik solutions. Even deleting the generated ikfast files and rebuilding the database didn’t help. What puzzled me the most was that even a very small displacement failed to return a solution.

On setting OpenRave to verbose mode, it turned out that the wrist joint was colliding with the gripper:

Self collision: (ER-4U:wrist1)x(ER-4U:gripperLeft) contacts=0
[odecollision.h:687] selfcol ER-4U, Links wrist1 gripperLeft are colliding

So the jaws collide whenever I use the cube’s coordinates as my target point; for an arbitrary valid target point, the arm moves correctly and along the specified direction.

I also need to adjust my CAD model, as the wrist attachment’s top is interfering with the lower arm, and the grippers are actually more easily modelled as revolute joints, not sliding joints as I had initially thought. I’ll also increase the initial gripper opening, as suggested by Rosen here.

UPDATE: It turned out that the collision was happening because I had a virtual wrist that rolled, to which the wrist attachment pitched up and down. Whenever the virtual wrist (wristZ) rolled, the pitching axis of the wrist attachment also changed, so any pitching movement would lead to the wrist obliquely colliding with the forearm. To amend this, I split the wrist attachment into two parts: wristX first pitches up and down, and wristZ then rolls about that axis (the grippers are attached to wristZ). Here’s a screenshot of the new model in action. This has also eliminated the occasional wobbling of the arm during a move.

The updated CAD model. Note that the grippers have become revolute joints, and the wrist attachment has been divided into two subparts – wristX is the projecting portion behind the differential gears and wristZ the part in front, to which the grippers are attached. Also note the small 1 mm clearance I introduced between the differential gears.

After Effects

‘Twas a balmy April
a Greek wedding
and its After Effects

All development work came to a screeching halt this April as my cousin got married. Amidst the hoopla and hype of an international wedding, I got to dabble with Adobe After Effects to make a wedding presentation for the beautiful couple. I had been meaning to jump in for the longest time, but the intimidating UI (and, more importantly, lack of time) held me back.

So after a day and a half’s worth of online tutorials and stumbling through the dark, here’s the final result!

After Effects definitely lived up to its hype and my expectations. There’s loads of cool stuff in this presentation that I worked on – the intro image has been layered off to provide a 3D perspective (note how crudely I’ve stamp-tooled the background), the photos are all staggered in 3D space, and there’s a particle field in suspended growth which looks really nice with the motion blur. Manipulating the camera movement was a real pain, and loads of painstaking keyframing and readjusting of the photos were required to get it to work (which is why at some places I was forced to give up). Shallow depth of field helps focus the attention well, and a random ‘camera shake’ lends a sense of dynamism to the presentation.

p.s. the title was shamelessly copied from a tutorial.

In other news, work on the senior project hasn’t really progressed as I finally had to start attending college (shakes fist at the final semester), since there are only a few weeks of classes remaining. I also wound up my internship at Hi Tech as I couldn’t really find any more time to go there. Pity that I couldn’t complete the work on the full-scale setup that they now have :( (shakes fist at the Autoexpo and its consequent delays)

Oh, and I’m heading to the RISS program again! Yay! :D I’ll be working under Prof. Drew Bagnell on the BIRDS project, where the objective is to use ML magic to help MAVs navigate through densely cluttered environments like forests using only on-board vision. *Very* interesting :)

Dis-arm-ament

Like I probably mentioned earlier, for my Senior project I’ve decided to work on visual servoing of a 5+1 DoF arm (it’s a SCORBOT ER-4u – found one gathering dust in the CAM lab at college). The objective was to use only a monocular camera mounted on top of the gripper to servo the arm, but due to time constraints I shall be adding an overhead camera to obtain poses more accurately. This is a long overdue post, so brace yourself :P

One of the first things I decided on was to utilize the OpenRave framework – I had read a lot about it, and was even recommended it by Pras. The niftiest feature of OpenRave is its ikfast module, a ‘Robot Kinematics Compiler’ that analytically solves robot inverse kinematics equations and generates optimized C++ files for later use that are highly reliable and fast. The performance of this very promising module seems phenomenal – ~5 ms evaluation time is what the official docs say.

So, the next step was implementing the ER-4U bot within OpenRave’s framework. I couldn’t really find CAD models that were useful (EDIT: I did find one on the SCORBOT Yahoo groups page, but it didn’t seem clean enough), and since I wanted to model the robot myself anyway, I created a CAD model in SolidWorks. Now, OpenRave can recognize COLLADA files, and I was happy to see a SolidWorks plugin that did export to COLLADA, but the plugin’s version (1.4) was older than the one OpenRave supports (1.5). Although I could get the model to render in OpenRave by fiddling with the header, the issue was that OpenRave could not understand what the links and the joints were, so I couldn’t, say, rotate the links about their joints.

A plain COLLADA 1.4 import into OpenRave. Note that there is no relationship defined between the links - If I start the physics engine, everything starts flying off.

Turns out that to get around issues like this, OpenRave allows you to define robots (essentially kinematic chains, or KinBodies) where each subpart of the assembly is defined as a link. So, taking a cue from this blog, I set about splitting the SCORBOT CAD model into separate WRLs and importing their geometries in my XML file. The WRLs then simply provide the meshes used for collision detection and trajectory planning, while the underlying math model is derived from the link info that is added to the XML file.

On my first attempt at getting this working, I managed to get the joints in place and the parts imported into OpenRave as separate entities, but with a very glaring error – the references between links that I used were in arbitrary orientations in SolidWorks. When OpenRave imports the WRLs, it just adds them to the scene in their native coordinates – thus giving the very staggered view that can be seen in the screenshot. Looking at other sample robots, it seems I’ll have to reorient all my sub-assemblies to the same local coordinates (like all pointing in the positive X direction) and take measurements between the links with the entire arm, say, horizontal or vertical.

As can be seen, because the relative distances I set in the XML file were according to a specific viewpoint in my overall assembly, and because OpenRave renders the sub assemblies in their local coordinates, all my parts are located with the wrong offsets. This will be remedied by taking measurements from an assembly wherein all the links are stretched out in one direction.

For the SCORBOT ER-4u, Intellitek has replaced the traditional serial interface with a proprietary USB controller. This severely restricts third-party applications from accessing the robot, as, unlike with a serial port, commands cannot simply be sent via a terminal. Fortunately, some developers have been able to reverse-engineer the DLL file and trace the basic method calls (this file). Compiling against this dynamic library enables access to high-level functions of the robot arm as well as to the servo PWMs. However, this restricts the implementation to the Windows environment. I’ll be looking at trying to run it under WINE on Linux.

As far as interfacing with OpenRave is concerned, OpenRave provides a C++ interface, and hence it is a trivial task to connect the methods of the USBC DLL and OpenRave. However, I haven’t implemented that yet.

UPDATE: Things have been progressing – head over to this OpenRave users list thread that I started. Rosen Diankov is awesome.

UPDATE 2: I’ve finally got it to start finding the IK solutions! Here’s a fun video of it in action :)

Peek a Boo!

My development machine here at Hi Tech being, well, low-end, had been driving me crazy. Eclipse used to take eons to build (even by Eclipse’s notorious standards), and rviz used to choke and sputter on my point clouds. Highly annoying.

My incessant pleas were soon enough heard, and I was given access to a mean development rig, resplendent with an SSD and all the fireworks. The caveat being that I could only access it over the network – which meant a simple ssh -X command to access the bountiful resources. Not an issue, right? Wrong.

Turns out that ssh -X only forwards drawing calls to my machine, which means that all the rendering is done locally. So, although I could avail of the (marginally) better build times (remember, it’s Eclipse; SSD notwithstanding, the immovable leviathan wins), I was back to square one.

My first tubelight moment was to try using VNC, and after some fiddling around I managed to set it up as described in my previous post (the xvnc one). Looked great, and I could finally access the desktop remotely, with the screen frame buffer being sent across my low-latency network. I happily ran Gazebo, only to see –
Missing GLX on Screen :0

As with all things Linux, it really couldn’t have been such an obvious solution. This error occurred because xvnc does not handle OpenGL calls to the screen. At this point I suddenly remembered my long tussle with Bumblebee on my laptop and looked up VirtualGL, and sure enough, it seemed to be the panacea. So, I downloaded the VirtualGL binaries from their SourceForge page and followed the user guide to install VirtualGL.

Minor modifications:
1. From 11.10 (Oneiric) onwards, the default display manager is lightdm, so instead of stopping gdm, run service lightdm stop
2. After doing this, also rmmod nvidia if you have the nvidia drivers installed, or unload the corresponding driver for ATI (fglrx?)
So, having installed the VirtualGL binaries on both the server (remote machine) and the client (your terminal), and after configuring vglserver_config on the remote machine, re-login to the remote machine using vglconnect [username]@[ip address], and voila! Everything will be set up by vglconnect. All you then need to do to execute any GL application is to prefix the command with vglrun, e.g. I use vglrun rosrun rviz rviz. (The whole sequence is sketched below.)
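To recap the whole dance, this is roughly the sequence I ran – adjust the display manager and driver module for your setup, and note that I’m assuming the default VirtualGL install path of /opt/VirtualGL:

# on the remote machine, before configuring VirtualGL:
sudo service lightdm stop                  # gdm on older releases
sudo rmmod nvidia                          # or fglrx for ATI
sudo /opt/VirtualGL/bin/vglserver_config   # answer the prompts as per the user guide

# from the client:
vglconnect username@remote-ip
vglrun rosrun rviz rviz                    # prefix any GL app with vglrun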

So what goes on behind the scenes is that VirtualGL loads a helper library whenever you call vglrun. This nifty little hook sits in memory and redirects all GL calls to and from the VGL server, which renders them into the required buffer. The application doesn’t even get to know what’s really going on. Neat, huh?
As a parting shot, here’s glxgears, running in its full ‘glory’.

Screenshot of VGL working over the network

xvnc

Situation: You need to access a powerful Linux machine remotely from another machine, with everything running on the remote machine (unlike ssh -X, in which objects are rendered on your machine), while someone else is *already* working on that machine and doesn’t want to be disturbed.

Solution:
1. Install vncserver on the remote machine and vncviewer on the client
2. For a host machine running GNOME with the Metacity window manager (e.g. Ubuntu, Fedora), edit ~/.vnc/xstartup to be something like this:

#!/bin/sh

# Uncomment the following two lines for normal desktop:
#exec /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
unset SESSION_MANAGER
gnome-session&
metacity&

(Thanks to http://forums.fedoraforum.org/archive/index.php/t-201885.html )
3. On the host machine, run vncserver. A first-time setup will ask you for the password you want to assign.
It’ll then start a VNC server, e.g. New 'X' desktop is OCUPC:1
OCUPC being the host name, and :1 the display the server is attached to
4. On the client machine, run vncviewer, pointing it at that host and display. Ta-da! (Commands summarized below.)
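Putting steps 3 and 4 together, the whole thing is just the following (the geometry/depth flags are optional; host name taken from the example above):

# on the remote machine
vncserver :1 -geometry 1280x800 -depth 24
# on the client
vncviewer OCUPC:1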

Picture n’ Pose

So, as I mentioned in my previous post, I am working on recreating a 3D photorealistic environment by mapping image data onto my point cloud in real time. The camera and the laser rangefinder can be considered as two separate frames of reference looking at the same point in the world. After calibration, i.e. after finding the rotational and translational relationship (a matrix) between the two frames, I can express (map) any point in one frame in the other. So, I first project the 3D point cloud onto the camera’s frame of reference. Then I select only the indices which fall within the bounds of the image – the laser rangefinder has a much wider field of view than the camera, so the region the camera captures is a subset of the region acquired by the rangefinder, and the projection gives me indices ranging from, say, negative coordinates like (-200,-300) up to (800,600) for a 640×480 image. Hence, I only need to colour the 3D points which project between (0,0) and (640,480), using the RGB values at the corresponding image pixels. The resultant XYZRGB point cloud is then published, which is what you see here. Obviously, since the spatial resolution of the laser rangefinder is much lower than the camera’s, the resulting output is not as dense, and requires interpolation, which is what I am working on right now.
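For the mathematically inclined, the projection boils down to the standard pinhole model – K is the camera intrinsic matrix and [R | t] the extrinsic relationship found during calibration (the symbols here are mine, not from the code):

\tilde{p} = K \, [R \mid t] \, (X, Y, Z, 1)^T, \qquad (u, v) = \left( \tilde{p}_x / \tilde{p}_z, \; \tilde{p}_y / \tilde{p}_z \right)

colour the 3D point (X, Y, Z) with the image value I(u, v) only if 0 \le u < 640 and 0 \le v < 480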

Head on to the images for a more detailed description. They’re fun!

Screenshot of the Calibration Process

So here, I first create a range image from the point cloud and display it in a HighGUI window for the user to select corner points. As I already know how I projected the 3D laser data onto the 2D range image, I can remap from the clicked coordinates to the actual 3D point in the laser rangefinder data. The corresponding points are selected on the image similarly, and the code then moves on to the next image with detected chessboard corners, till a specified number of successful grabs is accomplished.

Screenshot of the result of extrinsic calibration

Here's a screenshot of rviz showing the cloud in action. Notice the slight bleed along the left edge of the checkerboard – that's because of issues in selecting corner points in the range image. Hopefully, in the real-world calibration we might be able to use a glossy-print checkerboard so that the rays bounce off the black squares, giving us nice holes in the range image to select. Another interesting thing to note is the 'projection' of the board on the ground. That's because the checkerboard is actually occluding the region on the ground behind it, and so the code faithfully copies whatever value it finds in the image corresponding to the 3D coordinate.

View after VGL

So, after zooming out, everything I mentioned becomes clearer. What this does is that it effectively increases the perceived resolution. The projection bit on the ground is also very apparent here.

Range find(h)er!

So, for the past few weeks I’ve been working on this really interesting project for creating a 3D mobile car environment in real time for teleop (and later autonomous movement). It starts with calibrating a 3D laser rangefinder’s data against a monocular camera’s feed, which allows me, once the calibration is done, to map each coordinate in my image to a 3D point in world coordinates (within the range of my rangefinder, of course). Any obstacles coming up in the real-world environs can then be rendered accurately in my immersive virtual environment. And since everything’s in 3D, the operator’s point of view can be moved to any required orientation – for instance, while parking, all that needs to be done is to shift to an overhead view. In addition, since teleop is relatively laggy, the presence of this rendered environment gives the operator a fair idea of the immediate environment, and the ability to continue along projected trajectories rather than stopping and waiting for the connection to resume.

So, as the SICK nodder was taking some time to arrive, I decided to play around with Gazebo and simulate the camera, the 3D laser rangefinder and the checkerboard pattern for calibration within it, and thus attempt to calibrate the camera-rangefinder pair from within the simulation. In doing so, I was finally forced to shift to Ubuntu (shakes fist at lazy programmers), which, although smoother, isn’t entirely as bug-free as it is made out to be. So, I’ve created models for the camera and rangefinder and implemented Gazebo dynamic plugins within them to simulate their output. I’ve also spawned a checkerboard-textured box, which I move around the environment using a keyboard-controlled node. So this is what it looks like right now:

Screenshot of w.i.p.

So here, the environment is in the Gazebo window at the bottom, where I've (crudely) annotated the checkerboard, camera and laser rangefinder. There are other bodies to provide a more interesting scan. The point cloud of the scan can be seen in the rviz window in the centre, and the camera image at the right. Note the displacement between the two. The KeyboardOp node is running in the background at the left, listening for keystrokes, and a sample HighGUI window displaying the detected chessboard corners is at the top left.

Looks optimistic!

Fedora 16

So yesterday I decided to preupgrade FC15 to 16. And, as always, Fedora didn’t disappoint in coming up with issues. Although it downloaded the required packages just fine, on reboot anaconda fired up and fell over on not being able to install qt-examples, and since that represented a possible media failure from the (local) repo, it decided to throw a fatal error at me, dropping to a non-responsive screen.
Fortunately, I could shift to tty2 (Alt+F2), where I got anaconda’s bash. I chrooted to /mnt/sysimg to get to my Fedora install and yum-removed qt-examples. This done, I rebooted to find anaconda chugging along fine.
As expected, that wasn’t the end. Since FC16 uses grub2 by default, anaconda tried to update grub, but because my Dell Utility partition didn’t leave any space at the beginning of the drive, it failed. So I booted up FC15, which fortunately had not been removed, and gparted the Dell Utility partition off. I then generated the grub2 config and installed grub2 to /dev/sda. Success!
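For completeness, the grub2 bit amounted to roughly the following (the standard Fedora grub2 commands; I’m glossing over exactly where they were run from):

grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda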
Not. So, I happily logged in to FC16, and it was only when I accessed my yum repos that I realized they were still FC15. After trying yum clean all and makecache, I looked around on the net and found that I needed to reinstall fedora-release. A yum reinstall fedora-release threw up an error: "Error: Protected multilib versions: fedora-release-15-3.noarch != fedora-release-16-1.noarch"
After a lot of looking around, I realized that the issue lay in the fact that somehow yum was getting $releasever wrong all the time. So, as a workaround, I created a releasever file in /etc/yum/vars with 16 in it. Voila!
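Which, in full, is just:

echo 16 > /etc/yum/vars/releasever    # as root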
I then had to erase all the duplicates on my system, which amounted to over 5.5 GB(!), using package-cleanup --cleandupes. Additionally, to avoid dependency hell, I had to remove all my Boost library installs.

Update: You can follow the rest of the development on this page. I don’t like it.

Aar-duino

So, just got done with the end sems, and I decided to get a positional fix from the BTC 3-axis gimbal using the spare ArduPilot Mega lying in the lab. The final objective is, of course, to track any tagged object autonomously while the drone is in the air. To accomplish this, I’ll first establish a roll, pitch and yaw compensation system, wherein the camera points at the same location irrespective of the angular position of the plane, while the plane holds its position. The next step would be to track a moving object in the same way, still keeping the plane’s position constant, and the final (and most complicated) phase would be to track a moving object while the plane itself is in motion.

From what I’ve read, I’m thinking of using dense optical flow for the tracking-while-moving scenario. This would make the system purely reactive, thus avoiding the need for differential GPS and error-prone altitude and distance measurements (for geo-fixing). How well suited optical flow is to something like this is something I will be looking into.

So, I went about programming the ArduPilot Mega and realized that I didn’t know what pin number the servo class object needed. A bit of searching led me nowhere. I was able to correctly identify the pin as PL5, but couldn’t see what pin number it corresponded to. A little browsing through the APM code led me to the lil’ devil in the APM_RC.h file. 45 it was, then. Suddenly the Excel sheet on the website made sense – the sequential numbers on the left-hand side were the mapped pin numbers. :/

Things were fun after that, and then we set about trying to figure out how to move the servos on our PTZ camera. And then we got stuck on the math. After much ensuing debate, we shortlisted three very contrasting results. Not such a good thing.

Update: (New Year, yay!) So, I figured it out while working on my Senior project on robotic arms (have I mentioned it yet? I intend to visually servo a 5+1 DoF arm). It is as simple as equating Euler rotations and fixed rotations. Assuming the camera to be fixed to the plane, the plane will either pitch or roll (it hardly yaws, and we’ll be ignoring that for now). To compensate for the motion of the plane, I have to rotate my camera about two *other* orthonormal axes. So, after fixing my frames, I can, say, compensate for pitch easily, as the axis of the pitching rotation is the same for the camera. However, for roll, the remaining camera axis is perpendicular to the axis of the plane’s roll. Hence, to get to the final position, one needs to perform two additional rotations about the two camera axes.
To summarize, we need to equate an XZX Euler rotation with an XYZ rotation where the rotation about Z is 0. Attached is my (very coarse) paper derivation.
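In symbols, the equation to solve works out to the following (my notation, not from the derivation sheet – \phi and \theta are the plane’s roll and pitch, and \alpha, \beta, \gamma the camera’s compensating rotations):

R_x(\alpha) \, R_z(\beta) \, R_x(\gamma) = R_x(\phi) \, R_y(\theta) \, R_z(0) = R_x(\phi) \, R_y(\theta)

given \phi and \theta, solve for \alpha, \beta, \gamma.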

Note, however, that this result is only mathematical. The actual camera assembly has motion constraints. So that needs to be worked out as well. Anyway, shouldn’t be too tough now.