Archive for December, 2010


Long time since the last post. Fell ill after the previous recce for the camera at the camere-wali-gali in Chandni Chowk. Work had stopped for the last week, and the flight test has been tentatively postponed to the first week of Jan. As mentioned in the last post, had to look for compact cameras that support remote capture, which turned out to be surprisingly challenging, as apparently no compacts are sold with that feature any more. Perhaps the manufacturers deemed it a frivolous expense.

Anyway, our first choice was the G10, which really blurred the boundary between a DSLR and a compact, but to our dismay it was nowhere to be found (it’s been long discontinued). After a lot of haggling we finally managed to net the Nikon P1 at a pretty decent bargain, and are now fiddling around with it. As an added bonus, the camera supports WiFi, but well, that’s unnecessary.

Using gPhoto on the command line is pretty straightforward, and the --hook-script param is really helpful. All I need to do is supply the path of my shell script that transfers the captured image to the ground station via scp, and voila! All the capture business is done. Alternatively, I could run some image processing code on the image before transferring it. But that will probably require a conversion first (unless some settings can be tinkered with), as the images are being captured in .nef format, and I doubt OpenCV accepts that format (I have to check, though). Another thing to note is the massive delay between consecutive pictures – around 2-3 seconds. I think it autofocuses each time, so if I can skip the autofocus bit, hopefully the camera will be able to shoot faster. Hm. Or possibly try the burst mode. Have to work on all this retrieval stuff for the moment.
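For reference, a minimal sketch of what that hook setup might look like – the ground-station hostname and destination path here are placeholders, not our actual setup:

```shell
#!/bin/sh
# upload.sh -- gphoto2 runs this hook at each capture stage, setting
# ACTION to init/start/download/stop, and (on the download stage)
# ARGUMENT to the path of the file just pulled off the camera.
if [ "$ACTION" = "download" ]; then
    # hypothetical ground-station host and path
    scp "$ARGUMENT" user@groundstation:/data/captures/
fi
```

The camera side is then driven with something like `gphoto2 --capture-image-and-download --interval 3 --hook-script ./upload.sh`.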


So, today’s work entailed getting file transfer working between the beagleboard and my linux box, getting the Axis camera communication over the network working, and modifying the detection code to accommodate the smaller sizes of the shapes I had to detect. The code seems to be working okay for now, but a nagging issue is the disappearance of very small targets – my pyramid-based mean-shift filter blurs the colour away when the image is that small. That will be fixed once we get better images. Also made a few rearrangements in the lab workspace setup, and now work with two monitors, keyboards and mice!

Output after modifying the elimination criteria

So, realized that getting file transfer working between the two required a protocol of sorts. Tried using scp, only to be greeted with ‘Connection Refused’ errors. Realized that openssh-server needed to be installed on my box as well. Did that, yet no dice. Then realized that I had to get the ssh server up and running too! So enabled the ssh service, ran it, and ensured that ssh was allowed in the system firewall list. A simple wget command on the beagleboard got the image onto the beagleboard, and a subsequent scp command did the job of transferring the image to my linux box.
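For the record, the whole dance boiled down to roughly these commands (2010-era Fedora service names from memory; IPs are placeholders, and the camera URL is the usual Axis still-image path):

```shell
# On my linux box (Fedora) -- install/start the ssh server and open
# the firewall for it:
yum install openssh-server
service sshd start
chkconfig sshd on                              # keep it on across reboots
iptables -I INPUT -p tcp --dport 22 -j ACCEPT  # or via the firewall GUI

# On the beagleboard -- grab the frame, then push it over:
wget -O image.jpg "http://<camera-ip>/axis-cgi/jpg/image.cgi"
scp image.jpg user@<linux-box-ip>:~/captures/
```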

Now, in the next flight test, what is intended is that the beagle will acquire images from the camera every second (or at some particular interval) and then transmit each image down to the control station. Eventually, as and when we get the digicam/DSLR, a similar job will be done by the board, in addition to any preprocessing/analysis required. What I’m thinking of doing is offloading part (or all) of the processing job onto the beagle. But that, of course, requires testing, and almost certainly optimizations. That seems some way off as of now, however.
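A rough sketch of that acquire-and-transmit loop – the camera IP and ground-station host are placeholders, and the loop is capped at three passes here just so the sketch terminates (the real thing would run indefinitely):

```shell
#!/bin/sh
# Grab a frame from the Axis camera every INTERVAL seconds and send it
# down to the ground station.
INTERVAL=1
for pass in 1 2 3; do
    stamp=$(date +%Y%m%d-%H%M%S)               # unique name per frame
    wget -q -O "frame-$stamp.jpg" "http://<camera-ip>/axis-cgi/jpg/image.cgi"
    scp -q "frame-$stamp.jpg" user@groundstation:/data/frames/ \
        && rm -f "frame-$stamp.jpg"            # free space once delivered
    sleep "$INTERVAL"
done
```

Timestamped filenames would also sidestep any image.jpg.1, .2 clutter on the receiving end.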

Also used opkg for the first time on the beagle. Don’t really know what’s going on too well, but it seems like a lighter version of dpkg. A gcc download required an opkg update first, and it downloaded some .gzs. No gcc-c++, however. Surprisingly, though, opencv 2.1.0 was present, so installed it.
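For my own future reference, the opkg incantations were along these lines (the syntax mirrors apt/dpkg):

```shell
# Angstrom package management via opkg:
opkg update              # refresh the package feeds first
opkg install gcc         # pulled in the toolchain packages
opkg list | grep opencv  # check what the feed carries
opkg install opencv      # 2.1.0 was there, happily
```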

Next to do –

  1. Compiling the code on the beagle.
  2. Building a shell script/C code to automate the capture-and-transmit job (using scp). I realized that I could alternatively just run an rsync after each image grab. Will try, and decide.
  3. Working on the camera-PC interface. Will be getting a camera to experiment on tomorrow, so will see how to go about that.
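On the rsync idea in point 2, the alternative I have in mind is a single command after each grab (directory and host names hypothetical):

```shell
# Sync the whole capture directory; only changed files actually move,
# and --remove-source-files clears them off the beagle once delivered.
rsync -avz --remove-source-files /home/root/captures/ \
      user@groundstation:/data/frames/
```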

Bugling the Beagle

The BeagleBoard’s here! Finally! So, after a day of messing around with the board and trying to figure out how it worked, I am now typing this very first blog post from within the Angstrom distribution loaded on the board! Woot!

In other related news, we had our first flight test this Saturday. The video feed was pretty good at 640×480, and the targets held up pretty well against the wind, contrary to ahem people’s expectations. Got a decent image of the star target, but the image quality in general was very poor. On switching to 1280 the video feed started lagging, and there was an issue with the antennas as well, so couldn’t test that thoroughly. However, it’s pretty evident that we need a better camera to get those stills. The Axis camera is just not making the cut – we couldn’t get it to focus at ~200ft, there is a LOT of spherical aberration, and the resolution wasn’t acceptable either. So, most probably we will be employing a still-image camera in the next flight test, when we’ll couple the beagleboard to the camera.

One of the better captures on flyby

So, back to the Beagle Board.

Now, the very first thing was setting up minicom. That was pretty straightforward, and following the instructions on the wiki, managed to get the serial comm working. The next part was checking the functioning of the board. So hooked the null-modem cable up to the board, connected the mini-USB cable, and watched an entire boot-up process that eventually led me straight to a linux (Angstrom) terminal over minicom. Encouraged by the result, I tried running it again with the display connected, only to be greeted by a Kernel Panic and subsequently hung ‘Uncompressing Linux…’ dialogs.

So, procured the MLO, u-boot and uImage files along with the Angstrom tarball from the Angstrom website. Formatted the SD card into a boot partition and an ext3 partition, and copied over the requisite stuff. Put everything together again and voila!

Points to be noted, then

  1. The default screen resolution is a VERY garish 640×480. It’s pretty exciting to look at initially, but is not workable. So, to get around this, after much searching, figured out that it is at the preboot stage (when U-Boot asks for a keypress to stop autoboot) that we assign the dvimode to the resolution of our requirement. So, it means a simple boot.scr (edited in vi) containing
    setenv dvimode 1024x768MR-16@60
    run loaduimage
    run mmcboot

    and you’re done!
  2. The SD card reader jackets (the micro-to-SD card converters) are VERY unreliable. DO NOT trust them. Ever. Go ahead with the much simpler and more reliable netconnect modems. If you get junk characters, check that the COM cable is tightly attached, and that the SD card has the MLO, uImage and u-boot.bin files in the boot partition.
  3. Plug in the HDMI-to-DVI cable before plugging in the power. Also, get a power supply of 5V and around 2A – an old Axis adapter fit the bill perfectly. Plug in the peripherals before the power too; the mini-USB cable is not really required then.
  4. Connecting the board to the network is easy enough. In the network connection applet, set the IPs manually, and set IPv6 to automatic. That gets the internet working.
  5. #beagle is your friend on freenode.
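One caveat on the boot.scr in point 1: U-Boot normally wants boot.scr wrapped with an image header by mkimage rather than saved as raw text, so if a vi-edited file ever stops being picked up, this is the standard recipe:

```shell
# write the plain-text commands first
cat > boot.txt << 'EOF'
setenv dvimode 1024x768MR-16@60
run loaduimage
run mmcboot
EOF
# then wrap them into the scripted image u-boot actually reads
mkimage -A arm -T script -C none -n "boot script" -d boot.txt boot.scr
```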

Now, as the beagleboard is up and running, the next task is to get opencv (and consequently the code) working on it. Hm. Also, will probably be looking at building customized boot images of Angstrom. Let’s see over the coming days.

Go, fetch!

Alright, so today’s work entailed grabbing images from the Axis camera over the network. After a little while, got to know that a simple wget to the HTTP address documented in the camera manual did the job. To implement this, wrote out a system() call fetching the image into the working directory. Also removed previous instances of image.jpg before doing that, else subsequent frames were saved as image.jpg.1, .2, etc.
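In effect, the system() call just shells out to something like this (the URL shown is the usual Axis still-image CGI path; the exact address is whatever the manual documents for our camera):

```shell
# Clear the old frame first, so wget doesn't save image.jpg.1,
# image.jpg.2, and so on; -O pins the output name either way.
rm -f image.jpg
wget -q -O image.jpg "http://<camera-ip>/axis-cgi/jpg/image.cgi"
```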
The flight test’s tomorrow, and the targets have come up pretty well. The modular design is especially pretty cool. However there are still doubts over their survivability and their ability to stay put on the windy airfield, especially as we didn’t manage to get adequate nails to secure them.
Had a late-night scare as the generator threatened to (literally) pull the plug on the test, but it’s still on, as some replacement is due. So got nothing new really to do, except work on the code a bit.
A nagging issue is that the code behaves weirdly if it doesn’t get the right shapes to qualify as letters. Specifically, asserts fail as the hue extraction methods don’t receive any image. So, will probably work on error control there. And that’s that for the day/night.

Target practice (or lack of it)

After yesterday’s good work output, today was a less worthwhile effort at the lab. Managed to make just one target (a green rhombus), and tried testing the Axis camera.
Which brings me to my rant – the Axis cameras are annoying in the extreme! First, more often than not, they fail to respond to my browser requests, and when they do, after repeated plugging/unplugging of LAN cables (and muttered curses), they show the video for a few seconds and then just – crash.
I think this has something to do with the fact that I’ve set the resolution at 1280, as I remember that setting it at 640×480 didn’t give me any issues. Am at a loss as to how to fix things. Perhaps it’s because of the router. I’ll have to test and see.
Also tried out the bullet wireless transceivers. The camera on the plane managed to send the video feed back, and the embedded people heaved a sigh of relief. If only they knew what’s wrong with the other camera…

And it’s done!

Yes! Finally! I got the code working on linux!

-Drum roll-

Had to configure Code::Blocks. There were minor hiccups, namely getting the include libraries sorted. Here’s how I managed to get the code running:

  1. Creating a new console project in Code::Blocks
  2. Going to Project -> Build options, and in Linker Settings, added the whole gamut of library files (in the ‘Other Linker options’). For the sake of completeness, they were -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_ml -lopencv_video -lopencv_features2d -lopencv_calib3d -lopencv_objdetect -lopencv_contrib -lopencv_legacy -lopencv_flann
  3. In the Search Settings, added /usr/local/include/opencv/ for the compiler tab, and /usr/local/lib/ for the linker tab
  4. The next step involved copying all my source files and headers in the project directory, and including them in the project. And that’s it!
  5. EDIT: That, apparently, is not it. To locate the shared libraries, the LD_LIBRARY_PATH environment variable needs to be set to the path of the opencv lib directory – export LD_LIBRARY_PATH=/usr/local/lib/
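Incidentally, the same settings boil down to a plain command line, handy for sanity-checking the flags outside the IDE (source file names here are just placeholders for my actual files):

```shell
# pkg-config expands to the same -I, -L and -l flags configured above
g++ -o detector main.cpp `pkg-config --cflags --libs opencv`
export LD_LIBRARY_PATH=/usr/local/lib/   # so the loader finds the .so files
./detector
```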

So, with that done finally, we can move on with

  1. Porting this code to the BeagleBoard/SBC
  2. Further development work, most notably getting the letter and shape recognition neural networks working. That shouldn’t take too much effort – the new interfaces can be explored.
  3. Updating the code to C++ according to the new framework. Now that would involve considerable (re-)learning.

And, here is an output of the code on the Raven logo! (Yes, loads of work is unfinished. But things are looking good! )

The first output of my code on Linux!

Moving on to GNU/Linux

As had been envisioned much earlier, the move to Linux has started. As expected, it looked intimidating at first, but patient reading of the build docs and fixing the network back in the lab (read: always-on internet) allowed me to get the latest version of OpenCV (2.2) and build it. It required quite a few development packages, and although I also downloaded IPP and TBB, which seem to be good things (i.e., they’ll help the code run faster), I don’t know yet if they would work on the BeagleBoard as and when it arrives (IPP, being an Intel x86 library, almost certainly won’t run on an ARM board). Anyway, the first priority being running the existing OpenCV 2.0 code, I had to go about making the OpenCV build.

A few things that were learnt in the process:

  1. Fedora (yes, I prefer Fedora to Ubuntu – it’s more cutting edge, imho) does not come with gcc installed. I hadn’t expected it to be missing from the vanilla install.
  2. gcc-c++ has to be installed as well. Even after installing gcc, cmake kept crying about a missing CXX compiler. A quick google search got me the solution to my quandary. Makes me wonder why they aren’t in the dependency list for cmake anyway.
  3. Having completed the build, I happily set about trying to compile the sample code using gcc. After repeated failures I realized that the quote marks around the pkg-config arguments are actually tilted backticks (`) – shell command substitution, that is. Whoa. Never seen anything like that earlier.
  4. I also realized that I had installed the repository version of OpenCV, which was what pkg-config’s cflags and libs were still pointing at. Erased it, only to find that the PKG_CONFIG_PATH I had added during the 2.2 install process didn’t work. Went about rebuilding the OpenCV code and performing the whole installation all over again. No dice. Haw.
  5. Figured out that the install path is at /usr/local/lib. So, that’s where to point PKG_CONFIG_PATH.
  6. Another annoying thing. After closing the terminal window, PKG_CONFIG_PATH reverted to its previous value. Fixed it by copying the generated opencv.pc file from /usr/local/lib to /usr/lib64/pkgconfig, next to the other default .pc files. Always loaded. Yay.
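For completeness, the persistent fix boils down to one of these (paths as they are on my box; yours may differ):

```shell
# Either approach makes pkg-config find opencv.pc in every new shell:
cp /usr/local/lib/opencv.pc /usr/lib64/pkgconfig/          # what I did
echo 'export PKG_CONFIG_PATH=/usr/local/lib' >> ~/.bashrc  # alternative
```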