Tag Archive: updates


Peek a Boo!

My development machine here at Hi Tech being, well, low end, had been driving me crazy. Eclipse used to take eons to build (even by Eclipse’s notorious standards), and rviz used to choke and sputter on my point clouds. Highly annoying.

My incessant pleas were soon enough heard and I was given access to a mean development rig, resplendent with an SSD and all the fireworks. The caveat being that I could only access it over the network – which meant a simple ssh -X command to access the bountiful resources. Not an issue, right? Wrong.

Turns out that ssh -X only forwards the X drawing calls to the client, which means that all the rendering is still done locally on my machine. So, although I could avail of the (marginally) better build times (remember, it’s Eclipse; SSD notwithstanding, the immovable leviathan wins), I was back to square one.

My first tubelight moment was to try using VNC, and after some fiddling around I managed to set it up, as described in my previous post. It looked great, and I could finally access the desktop remotely, with the screen frame buffer being sent across my low-latency network. I happily ran Gazebo, only to see –
Missing GLX on Screen :0

As with all things Linux, the solution couldn’t possibly be that simple. The error occurs because Xvnc does not handle OpenGL calls to the screen. At this point I suddenly remembered my long tussle with Bumblebee on my laptop and looked up VirtualGL, and sure enough, it seemed to be the panacea. So, I downloaded the VirtualGL binaries from their SourceForge page and followed the user guide to install it.

Minor modifications:
1. From 11.10 (Oneiric) onwards, the default display manager is LightDM, so instead of stopping gdm, run sudo service lightdm stop
2. After doing this, also rmmod nvidia if you have the NVIDIA drivers installed. Unload the corresponding driver for ATI cards (fglrx?)
So, having installed the VirtualGL binaries on both the server (the remote machine) and the client (your terminal), and after running vglserver_config on the remote machine, log back in to the remote machine using vglconnect [username]@[ip address], and voila! Everything will be set up by vglconnect. All you then need to do to execute any GL application is to prefix the command with vglrun, e.g. I use vglrun rosrun rviz rviz.
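
Strung together, the whole dance looks roughly like this (a sketch from memory; the vglserver_config path assumes the default /opt/VirtualGL install location):

    # on the remote machine (one-time setup)
    sudo service lightdm stop                  # stop the display manager
    sudo rmmod nvidia                          # unload the GPU driver (fglrx for ATI)
    sudo /opt/VirtualGL/bin/vglserver_config
    sudo service lightdm start

    # from the client, each session
    vglconnect [username]@[ip address]
    vglrun glxgears                            # quick sanity check
    vglrun rosrun rviz rviz                    # prefix any GL app with vglrun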

So what goes on behind the scenes is that VirtualGL loads a helper library whenever you call vglrun. This nifty little hook sits in memory and redirects all GL calls to the VirtualGL server, which renders them into the required buffer and ships the frames back. So the application doesn’t even get to know what’s really going on. Neat, huh?
As a parting shot, here’s glxgears, running in its full ‘glory’.

Screenshot of VGL working over the network


Range find(h)er!

So for the past few weeks I’ve been working on this really interesting project: creating a 3D mobile car environment in real time for teleop (and, later, autonomous movement). It starts with calibrating a 3D laser rangefinder’s data against a monocular camera’s feed, which, once the calibration is done, lets me map each coordinate in my image to a 3D point in world coordinates (within the range of my rangefinder, of course). Any obstacles coming up in the real-world environs can then be rendered accurately in my immersive virtual environment. And since everything’s in 3D, the operator’s point of view can be moved to any required orientation. While parking, for instance, all that needs to be done is to shift to an overhead view. In addition, since teleop is relatively laggy, this rendered environment gives the operator a fair idea of the immediate surroundings, and the ability to continue along projected trajectories rather than stopping and waiting for the connection to resume.
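
For the record, the mapping itself is nothing exotic: once the camera intrinsics K and the rangefinder-to-camera extrinsics (R, t) are calibrated, a rangefinder point (X, Y, Z) lands on pixel (u, v) via the standard pinhole relation

    s \, (u, v, 1)^T = K \, [\, R \mid t \,] \, (X, Y, Z, 1)^T

where s is a scale factor; with the range known, this is what lets me go back and forth between image coordinates and world points.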

So, as the SICK nodder was taking some time to arrive, I decided to play around with Gazebo and simulate the camera, the 3D laser rangefinder and the checkerboard pattern for calibration within it, and thus attempt to calibrate the camera-rangefinder pair entirely in simulation. In doing so, I was finally forced to shift to Ubuntu (shakes fist at lazy programmers), which, although smoother, isn’t entirely as bug free as it is made out to be. So, I’ve created models for the camera and rangefinder and implemented Gazebo dynamic plugins within them to simulate them. I’ve also spawned a checkerboard-textured box, which I move around the environment using a keyboard-controlled node. This is what it looks like right now

Screenshot of w.i.p.

So here, the environment is in the Gazebo window at the bottom, where I've (crudely) annotated the checkerboard, camera and laser rangefinder. There are other bodies to provide a more interesting scan. The point cloud of the scan can be seen in the rviz window in the centre, and the camera image at the right. Note the displacement between the two. The KeyboardOp node is running in the background at the left, listening for keystrokes, and a sample HighGui window displaying the detected chessboard corners is at the top left.

Looks optimistic!

Fedora 16

So yesterday I decided to preupgrade FC 15 to 16. And, as always, Fedora didn’t disappoint in coming up with issues. Although it downloaded the required packages just fine, on reboot anaconda fired up and fell through on not being able to install qt-examples, and since that represented a possible media failure from the (local) repo, it decided to fatal-error on me, dropping to a non-responsive screen.
Fortunately, I could shift to tty2 (Alt+F2), where I got anaconda’s bash. I chrooted into /mnt/sysimage to get to my Fedora install and yum removed qt-examples. This done, I rebooted to find anaconda chugging along fine.
As expected, that wasn’t the end. Since FC 16 uses GRUB 2 by default, anaconda tried to update GRUB, but because my Dell Utility partition didn’t leave any space at the beginning of the drive, it failed. So I booted up FC 15, which fortunately had not been removed, and gparted the Dell Utility partition off. I then generated the GRUB 2 config and installed it to /dev/sda. Success!
Not. So, I happily logged in to FC 16, and it was only when I accessed my yum repos that I realized they were still fc15. After trying yum clean all and makecache, I looked it up on the net to find that I needed to reinstall fedora-release. A yum reinstall fedora-release threw up an error: "Error: Protected multilib versions: fedora-release-15-3.noarch != fedora-release-16-1.noarch"
After a lot of looking around, I realized that the issue lay in the fact that yum was somehow getting $releasever wrong all the time. So, as a workaround, I created a releasever file in /etc/yum/vars with 16 in it. Voila!
I then had to erase all the duplicate packages on my system, which amounted to over 5.5 GB(!), using package-cleanup --cleandupes. Additionally, to avoid dependency hell I had to remove all my Boost library installs.
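
Condensed, the fix boils down to something like this (from memory, run as root; package-cleanup comes from yum-utils):

    echo 16 > /etc/yum/vars/releasever     # pin $releasever so the repos resolve to fc16
    yum clean all && yum makecache
    package-cleanup --cleandupes           # purge the leftover fc15 duplicates (~5.5 GB!)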

Update: You can follow the rest of the development on this page. I don’t like it.

Aar-duino

So, just got done with the end sems, and I decided to get a positional fix from the BTC 3-axis gimbal using the spare ArduPilot Mega lying in the lab. The final objective is, of course, to track any tagged object autonomously while the drone is in the air. To accomplish this, I’ll first establish a roll, pitch and yaw compensation system, wherein the camera points at the same location irrespective of the angular position of the plane, while the plane holds its position. The next step would be to track a moving object the same way, still keeping the plane’s position constant, and the final (and most complicated) phase would be to track a moving object while the plane itself is in motion.

From what I’ve read, I’m thinking of using dense optical flow for the tracking-while-moving scenario. This would make the system purely reactive, avoiding the need for differential GPS and error-prone altitude and distance measurements (for geo-fixing). How well suited optical flow is to something like this is something I’ll be looking into.

So, I went about programming the ArduPilot Mega and realized that I didn’t know the pin number for the Servo class object. A bit of searching led me nowhere. I was able to correctly identify the pin as PL5, but couldn’t see what pin number it corresponded to. A little browsing through the APM code led me to the lil’ devil in the APM_RC.h file. 45 it was, then. Suddenly the Excel sheet on the website made sense: the sequential numbers on the left-hand side were the mapped pin numbers. :/

Things were fun after that, and then we set about trying to figure out how to move the servos on our PTZ camera. And then we got stuck on the math. After much ensuing debate, we were left with three very contrasting results. Not such a good thing.

Update: (New Year, yay!) So, I figured it out while working on my senior project on robotic arms (have I mentioned it yet? I intend to visually servo a 5+1 DoF arm). It is as simple as equating Euler rotations and fixed-axis rotations. Assume the camera is fixed to the plane; the plane will either pitch or roll (it hardly yaws, and we’ll be ignoring that for now). To compensate for the motion of the plane, I have to rotate my camera about two *other* orthonormal axes. So, after fixing my frames, I can compensate for pitch easily, as the axis of the pitching rotation is the same for the camera. For roll, however, the remaining camera axis is perpendicular to the axis of the plane’s roll. Hence, to get to the final position, one needs to perform two additional rotations about the two camera axes.
To summarize, we need to equate an XZX Euler rotation and an XYZ fixed-axis rotation in which the rotation about Z is 0. Attached is my (very coarse) paper derivation.
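
In matrix form, and very much dependent on my choice of frames and sign conventions (so treat this as a sketch of the bookkeeping rather than the final answer), the equation being solved is

    R_x(\alpha)\, R_z(\beta)\, R_x(\gamma) \;=\; R_x(\phi_{\mathrm{roll}})\, R_y(\theta_{\mathrm{pitch}})

i.e. the camera’s XZX Euler sequence on the left and the plane’s roll-pitch rotation (yaw set to zero) on the right; equating the entries of the two matrices gives the camera angles.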

Note, however, that this result is only mathematical. The actual camera assembly has motion constraints. So that needs to be worked out as well. Anyway, shouldn’t be too tough now.

Seeing Red

Unfortunately, thanks to the @!#$ internet connection, this blog post was lost. Anyway, the gist of it was that not much has been done since the last post, and work is proceeding at a snail’s pace. A major error was also detected and rectified: I had naively set the hue range to {1,180} rather than {0,180} because of a spike at red in the histogram. That spike, in turn, was there because the black background of the image (I simply extract the features of interest and place them on a blank image), when transformed to the HSV colour space, comes up as hue 0, i.e. red. So I fixed it by passing the mask to the histogram calculation, and all was good.

The next thing to do is implementing the back-propagation neural network. What’s been figured out is that the test images from the public database have to be scaled down to a smaller resolution, say 16×16, fed to some arbitrary number of hidden-layer neurons, and then to six output neurons, since we need 6 bits to encode the 36 classes (26 letters + 10 digits).

I end, as the original post did, with – This is going to be one busy semester.

Cam-era Part II

First of all, a very happy New Year!

Right, now that we’re done with the niceties, on to the update: I have now managed to capture pictures from the camera over the USB cable on the BeagleBoard and transfer the images to my computer using a script running on the board. I use the gphoto2 command-line interface and a simple hook script which rsyncs with a folder on my computer over the network. To avoid the password hassles I created a public key on the Beagle and added it to the authorized_keys file in ~/.ssh/ on my machine. (Of course there aren’t any security issues – only I use the BeagleBoard.)
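
The one-time key setup plus the hook itself look roughly like this (a sketch: user, my-computer and the paths are placeholders, and as far as I recall gphoto2 hands the hook an ACTION and ARGUMENT via the environment, ARGUMENT being the downloaded file):

    # on the beagleboard: passwordless ssh to my computer (one time)
    ssh-keygen -t rsa                      # empty passphrase
    ssh-copy-id user@my-computer           # appends the key to ~/.ssh/authorized_keys

    # hook.sh – run by gphoto2 after each capture
    if [ "$ACTION" = "download" ]; then
        rsync -az "$ARGUMENT" user@my-computer:~/captures/
    fi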

However, the delay between consecutive pictures is quite significant (I timed it at ~3.6 seconds between snaps) and it remains to be seen whether that will be adequate (I feel it won’t). I tried setting the camera to fixed aperture, landscape mode, etc. so that it doesn’t waste precious seconds on autofocusing, but no dice. Changing the image resolution didn’t help either. So the only alternative seems to be taking pictures in burst mode and then transferring them. The trigger will have to be done via video(?), and there will be a massive delay between consecutive bursts. The low resolution of such pictures isn’t too good a thing either. Another mode the camera supports is a 5-second continuous-shot buffer, but it remains to be seen whether I can access that buffer on the fly (it seems unlikely).

So, as matters stand, unless a DSLR is used, it’s going to be tough getting high-resolution snaps at short time intervals. Sad, but anyway, let’s save the verdict for the flight test.

Cam-era

Long time since the last post. I fell ill after the previous recce for a camera at the camere-wali-gali in Chandni Chowk. Work had stopped for the last week, and the flight test has been postponed, tentatively to the first week of Jan. As mentioned in the last post, I had to look for compact cameras that support remote capture, which turned out to be surprisingly challenging, as apparently no compacts are sold with that feature any more. Perhaps the manufacturers deemed it a frivolous expense.

Anyway, our first choice was the G10, which really blurred the boundary between a DSLR and a compact, but to our dismay it was nowhere to be found (it’s long been discontinued). After a lot of haggling we finally managed to net a Nikon P1 at a pretty decent bargain, and are now fiddling around with it. As an added bonus, the camera supports WiFi, but well, that’s unnecessary.

Using gphoto2 on the command line is pretty straightforward, and the hook-script parameter is really helpful. All I need to do is pass the path of my shell script, which transfers the captured image to the ground station via scp, and voila! All the capture business is done. Alternatively, I could run any image-processing code on the image before transferring it. But that will probably require some conversion first (unless some settings can be tinkered with), as the images are being captured in .nef format, and I doubt OpenCV accepts that format (I have to check, though). Another thing to note is the massive delay between consecutive pictures, ~2-3 seconds. I think it autofocuses each time, so if I can skip the autofocus step, the camera will hopefully be able to shoot faster. Hm. Or possibly try the burst mode. Have to work on all this retrieval stuff for the moment.
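
For reference, the capture side then reduces to a one-liner of this sort (hedged: flag spellings may vary across gphoto2 versions, and hook.sh is my placeholder transfer script):

    # capture a frame every 5 seconds and run the hook on each downloaded image
    gphoto2 --capture-image-and-download --interval 5 --hook-script ./hook.sh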

ssh!

So, today’s work entailed getting file transfer working between the BeagleBoard and my Linux box, getting the Axis camera communicating over the network, and modifying the detection code to accommodate the smaller sizes of the shapes I had to detect. The code seems to be working okay for now, but a nagging issue is the disappearance of very small targets. That happens because my pyramid-based mean-shift filter blurs away the colour when the target is that small; it will be fixed once we get better images. I also made a few rearrangements in the lab workspace, and now work with two monitors, keyboards and mice!

Output after modifying the elimination criteria

So, I realized that getting file transfer working between the two required a protocol of sorts. I tried using scp, only to be greeted with ‘Connection refused’ errors, and realised that openssh-server needed to be installed on my box as well. Did that, yet no dice. Then I realized that I also had to get the ssh server up and running! So I enabled the sshd service, started it, and made sure ssh was allowed in the system firewall list. A simple wget command got the image from the camera onto the BeagleBoard, and a subsequent scp command did the job of transferring it to my Linux box.
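
For my own notes, the commands boiled down to roughly the following (the service names are for my Fedora box, and the camera’s snapshot URL is a placeholder):

    # on the linux box
    sudo yum install openssh-server
    sudo systemctl enable sshd.service
    sudo systemctl start sshd.service      # and allow SSH in the firewall tool

    # on the beagleboard
    wget -O image.jpg "http://<axis-camera-ip>/<snapshot-url>"
    scp image.jpg user@<linux-box-ip>:~/captures/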

Now, in the next flight test, the intent is that the Beagle will acquire images from the camera every second (or at some particular interval) and then transmit each image down to the control station. Eventually, as and when we get the digicam/DSLR, a similar job will be done by the board, in addition to any preprocessing/analysis required. What I’m thinking of doing is offloading part (or all) of the processing job onto the Beagle. That, of course, requires testing, and almost certainly optimizations. It seems some way off as of now, however.

Also used opkg for the first time on the Beagle. I don’t really know what’s going on too well, but it seems like a lighter version of dpkg. Installing gcc required an opkg update first, and it downloaded some .gz packages. No gcc-c++, however. Surprisingly, though, OpenCV 2.1.0 was present, so I installed it.

Next to do –

  1. Compiling the code on the beagle.
  2. Building a shell script/C code to automate the capture-and-transmit job (using scp). I realized that I could alternatively simply run an rsync after each image grab. Will try both and decide (see the sketch after this list).
  3. Working on the camera-PC interface. Will be getting a camera to experiment on tomorrow, so will see how to go about that.
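
A first cut of the capture-and-transmit script from item 2 could be as simple as the following (a sketch only; the camera URL, interval and destination are placeholders, and the rsync-vs-scp question is still open):

    #!/bin/sh
    # grab a frame from the camera every second and push it to the ground station
    while true; do
        wget -q -O /tmp/frame.jpg "http://<camera-ip>/<snapshot-url>"
        rsync -az /tmp/frame.jpg user@<ground-station>:~/captures/
        sleep 1
    done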

Bugling the Beagle

The BeagleBoard’s here! Finally! So, after a day of messing around with the board and trying to figure out how it worked, I am now typing this blog post, my very first from within the Angstrom distribution loaded on the board! Woot!

In other related news, we had our first flight test this Saturday. The video feed was pretty good at 640×480, and the targets held up pretty well against the wind, contrary to ahem people’s expectations. Got a decent image of the star target, but the image quality in general was very poor. On switching to 1280 the video feed started lagging, and there was an issue with the antennas as well, so we couldn’t test that thoroughly. However, it’s pretty evident that we need a better camera to get those stills. The Axis camera is just not making the cut: we couldn’t get it to focus at ~200 ft, there is a LOT of spherical aberration, and the resolution wasn’t acceptable either. So most probably we will be employing a still-image camera in the next flight test, when we’ll couple the BeagleBoard to the camera.

One of the better captures on flyby

So, back to the Beagle Board.

Now, the very first thing was setting up minicom. That was pretty straightforward; following the instructions on the wiki, I managed to get the serial comms working. The next part was checking that the board functioned. So I hooked up the null-modem cable to the board, connected the mini-USB cable, and watched an entire boot-up process that eventually led me straight to a Linux (Angstrom) terminal over minicom. Encouraged by the result, I tried running it again with the display connected, only to be greeted by a kernel panic and subsequently hung ‘Uncompressing Linux…’ dialogs.

So, I procured the MLO, u-boot and uImage files along with the Angstrom tarball from the Angstrom website, formatted the SD card into boot and ext3 partitions, and copied the requisite files across. Put everything together again and voila!

Points to be noted, then

  1. The default screen resolution is a VERY garish 640×480. It’s pretty exciting to look at initially, but it is not workable. To get around this, after much searching, I figured out that it is at the preboot stage (when the bootloader asks for a keypress to stop autoboot) that we assign the dvimode variable to the resolution we require. So it comes down to a simple boot.scr (edited in vi) containing
    setenv dvimode 1024x768MR-16@60
    run loaduimage
    run mmcboot

    and you’re done!
  2. The SD card adapter jackets (the micro-SD to SD converters) are VERY unreliable. DO NOT trust them. Ever. Go ahead with the much simpler and more reliable netconnect modems. If you’re getting junk characters, check that the COM cable is tightly attached and that the SD card has the MLO, uImage and u-boot.bin files in its boot partition.
  3. Plug in the HDMI-to-DVI cable before plugging in the power. Also, get a power supply of 5 V and around 2 A; an old Axis adapter fit the bill perfectly. Plug in the peripherals before the power too. The mini-USB cable is then not really required.
  4. Connecting the board to the network is easy enough. In the network connection applet, set the IPs manually, and set IPv6 to automatic. That gets the internet working.
  5. #beagle is your friend on freenode.

Now that the BeagleBoard is up and running, the next task is to get OpenCV (and consequently the code) working on it. Hm. I will also probably be looking at building customized boot images of Angstrom. Let’s see over the coming days.

And it’s done!

Yes! Finally! I got the code working on linux!

-Drum roll-

The main task was configuring Code::Blocks; there were minor hiccups, namely with the include and library paths. How I managed to get the code running was by

  1. Creating a new console project in Code::Blocks
  2. Going to Project -> Build options, and in Linker Settings, added the whole gamut of library files (in the ‘Other Linker options’). For the sake of completeness, they were -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
  3. Under the Search directories settings, added /usr/local/include/opencv/ in the Compiler tab and /usr/local/lib/ in the Linker tab
  4. The next step involved copying all my source files and headers in the project directory, and including them in the project. And that’s it!
  5. EDIT: That, apparently, is not it. To locate the shared libraries, the LD_LIBRARY_PATH environment variable needs to be set to the path of the OpenCV lib directory – export LD_LIBRARY_PATH=/usr/local/lib/
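
For completeness, outside the IDE the equivalent build is a single g++ invocation along these lines (a sketch; "detector" and the *.cpp glob stand in for my actual sources):

    g++ *.cpp -o detector \
        -I/usr/local/include/opencv -L/usr/local/lib \
        -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml \
        -lopencv_video -lopencv_features2d -lopencv_calib3d \
        -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann

    export LD_LIBRARY_PATH=/usr/local/lib/    # so the loader finds the OpenCV shared libs
    ./detector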

So, with that done finally, we can move on with

  1. Porting this code to the BeagleBoard/SBC
  2. Further development work, most notably getting the letter and shape recognition neural networks working. That shouldn’t take too much effort – the new interfaces can be explored.
  3. Updating the code to C++ according to the new framework. Now that would involve considerable (re-)learning.

And here is an output of the code on the Raven logo! (Yes, loads of work is unfinished. But things are looking good!)

The first output of my code on Linux!