Cubieboard Fileserver + Kinect

So I’ve recently bought a Cubieboard to use as a file server. If you look closely at this post’s featured image (or if you’ve already clicked on the link) you might notice why I picked this particular embedded board. That’s right: it has a SATA port for connecting a standard HDD or SSD. This should offer a huge performance improvement over USB-connected HDDs (although the Cubieboard only has a 100 Mbit/s Ethernet interface).

I will describe what I’ve done so far with the Cubieboard and the Kinect. It’s all at the “Hello World” level, so this post will mainly serve as a reminder to myself of the basics.

About the Cubieboard

This board is only slightly more expensive than a Raspberry Pi ($49 instead of $35, although both prices depend heavily on the country you live in and the distributor you buy from), but comes with much more processing power (an Allwinner A10 @ 1 GHz), twice the RAM (1 GB) and on-board flash memory (4 GB, preloaded with Android 4). It also has analog audio in/out, HDMI out, 2 USB host ports, a micro SD slot (useful for booting another OS off an SD card), an IR receiver, the aforementioned SATA port and 96 I/O pins that include pins for I2C, SPI, RGB/LVDS, CSI/TS, FM-IN, ADC, CVBS, VGA, SPDIF-OUT, R-TP, etc.

So it’s a pretty neat package, and it’s almost a shame to waste it on just serving files. I wanted to give all that unused processing power something useful to do, so I decided to hook up a Microsoft Kinect.

About the Kinect

By now the Kinect should be a well-known device to all of you. Its release made significant waves in the hacker/maker community, because for the first time there was an inexpensive 3D vision sensor that could be used for oh so many things besides its intended purpose (gaming on the Xbox 360). If you’re interested in the many projects that have been done with a Kinect, I refer you to the Kinecthacks website.

The Kinect’s main features are:

  • Depth mapping with a resolution of 640×480 pixels, based on an IR laser projector (structured light)
  • RGB color video stream with a resolution of 640×480 pixels
  • A servo to tilt the sensor head
  • A 3-axis accelerometer

One might wonder why the Kinect includes the tilt servo and the accelerometer. My guess is that the Microsoft Kinect developers never intended this device to be limited to Xbox gaming, because to me those features only really make sense when you’re building a robot.

My Efforts so far

Fortunately the Cubieboard developers provide an Ubuntu/Linaro image that you can just flash to a micro SD card. The Cubieboard boots Android when no micro SD card is inserted, and boots from the SD card when a bootable one is present. Thanks to the modern Linux distribution, installing the required packages is as easy as can be. On the console just run:

sudo apt-get install freenect python-freenect python-numpy python-opencv

After this you’ll have to disable the kernel driver for the Kinect so that libfreenect can claim the device:

sudo modprobe -r gspca_kinect
sudo modprobe -r gspca_main
# the redirect needs root privileges too, so wrap the whole command in sh -c
sudo sh -c 'echo "blacklist gspca_kinect" >> /etc/modprobe.d/blacklist.conf'

And that’s all: the Kinect can now be plugged into one of the Cubieboard’s USB ports and you can start to access it via Python.
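
To quickly check that libfreenect can actually claim the device, a one-off grab from a Python shell is enough (a minimal sketch; as far as I can tell, sync_get_depth() returns None when the device can’t be opened):

import freenect
frame = freenect.sync_get_depth()  # None if the Kinect can't be opened
print "Kinect found" if frame is not None else "no Kinect detected"
freenect.sync_stop()  # release the device again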

Today I’ve managed to read the depth information from the 3D sensor, read images from the RGB camera, tilt the sensor head and do some manipulations on the retrieved data with OpenCV. To use the Kinect with OpenCV in Python, we begin by importing all the necessary modules; a short sketch for tilting the head and reading the accelerometer follows right after.

import freenect, cv, numpy
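
Tilting the head and reading the accelerometer work a bit differently from the frame grabbing shown below: the sync_* functions manage the device themselves, while the motor is accessed through an explicit device handle. This is a minimal sketch of how I understand the freenect bindings handle this (calling freenect.sync_stop() first, so the sync interface lets go of the device):

freenect.sync_stop()                        # make sure the sync interface releases the device
ctx = freenect.init()
dev = freenect.open_device(ctx, 0)
freenect.set_tilt_degs(dev, 10)             # tilt the head up by 10 degrees
freenect.update_tilt_state(dev)
state = freenect.get_tilt_state(dev)
ax, ay, az = freenect.get_mks_accel(state)  # acceleration in m/s^2
print ax, ay, az
freenect.close_device(dev)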

Reading Depth Data

Reading the depth data from the Kinect sensor is rather simple:

data = freenect.sync_get_depth()
data = data[0].astype(numpy.uint8)
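
Note that a plain cast to uint8 keeps only the lowest 8 bits of the 11-bit depth values, which causes visible banding in the image. A nicer conversion (if I remember correctly, this is what the libfreenect demos do) clips the values and shifts them down into the 0–255 range:

depth, timestamp = freenect.sync_get_depth()
numpy.clip(depth, 0, 2**10 - 1, depth)   # discard the unreliable far range
data = (depth >> 2).astype(numpy.uint8)  # map 0..1023 down to 0..255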

Now if you (unlike me) have a monitor attached to your Cubieboard (or maybe you’re doing this on some other type of computer) you should be able to directly display the depth data you’ve just captured with OpenCV, like so:

cvMat = cv.fromarray(data)
cv.ShowImage("depth", cvMat)
cv.WaitKey(0)  # ShowImage only renders once WaitKey pumps the event loop
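
For a live preview instead of a single frame, a simple grab-and-show loop does the job (a minimal sketch, again assuming a monitor is attached):

while True:
    depth, _ = freenect.sync_get_depth()
    cv.ShowImage("depth", cv.fromarray(depth.astype(numpy.uint8)))
    if cv.WaitKey(10) == 27:  # quit on ESC
        break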

Otherwise you might want to save that depth image to disk, so you can look at it on another computer:

cvMat = cv.fromarray(data)
cv.SaveImage("depth.png", cvMat)

The result might look like this:

[Image: depth image captured from the Kinect]

You can easily manipulate image data with OpenCV. For example, this code draws a black (0,0,0) circle with a radius of 50 pixels at the center of the image (320, 240):

# first convert the cvMat to an IplImage
# cv.GetSize() returns a tuple, in this case (640,480); a depth image only
# has one color channel, hence the 1 as the last argument
img = cv.CreateImage(cv.GetSize(cvMat), cv.IPL_DEPTH_8U, 1)
cv.SetData(img, cvMat.tostring())
# now draw a circle on that image
cv.Circle(img, (320,240), 50, (0,0,0), 0, 8, 0)
# save to file
cv.SaveImage("depth_manipulated.png", img)

And the result:

[Image: depth image with a circle drawn at its center]

You can easily do the same thing with the RGB data (although RGB has 3 color channels, as opposed to only one for depth). An interesting next step is to overlay the depth image with the RGB image and see how that looks. For convenience we first define a couple of functions to grab the data we need.

def getrgb():
    data = freenect.sync_get_video()
    data = data[0].astype(numpy.uint8)
    cvMat = cv.fromarray(data)
    img = cv.CreateImage(cv.GetSize(cvMat), cv.IPL_DEPTH_8U, 3) #3 channels for RGB color!
    cv.SetData(img, cvMat.tostring())
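    # an aside from me: the Kinect delivers RGB while OpenCV assumes BGR, so
    # saved images may have red and blue swapped; converting the image with
    # cv.CvtColor and cv.CV_RGB2BGR fixes that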
    return img

def getdepth():
    data = freenect.sync_get_depth()
    data = data[0].astype(numpy.uint8)
    cvMat = cv.fromarray(data)
    img = cv.CreateImage(cv.GetSize(cvMat), cv.IPL_DEPTH_8U, 1) #1 channel for depth!
    cv.SetData(img, cvMat.tostring())
    return img

def newimg(size):
    return cv.CreateImage(size, cv.IPL_DEPTH_8U, 3) #create an empty image with 3 color channels

As you can see, grabbing a video frame from the Kinect is as simple as grabbing the depth data. The main difference (data-wise) is that the RGB video comes with 3 color channels, while the depth data comes with only one. I emphasize this difference because you run into problems when you want to combine both images: before you can do that, you’ll need to convert the depth data into an image that also has 3 color channels. So here is the code to combine both images.

depth = getdepth()
rgb = getrgb()
dstDepthImg = newimg(cv.GetSize(depth))
finalImage = newimg(cv.GetSize(depth))
cv.Merge(depth, depth, depth, None, dstDepthImg) # this assigns the single depth channel to all 3 channels of the empty dstDepthImg
# and finally adding the rgb image to the reformatted depth image
cv.Add(dstDepthImg, rgb, finalImage)
cv.SaveImage("combined_data.png", finalImage)

The result then might look like this:

[Image: depth image overlaid with the RGB image]
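
As an aside, if I’m not mistaken cv.CvtColor can do the same one-to-three-channel expansion as the cv.Merge call above:

dstDepthImg = newimg(cv.GetSize(depth))
cv.CvtColor(depth, dstDepthImg, cv.CV_GRAY2RGB)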

All rather basic examples of how to use the Kinect and OpenCV, but I think that’s not bad for a first “Hello World” session.

And now, as a final piece of code, I’ll show you how to write a few frames of data into a video file.

format = cv.CV_FOURCC('M', 'J', 'P', 'G') # MJPEG - Motion JPEG - encoding
fps = 24  #frames per second
resolution = (640,480)  # this is the resolution that cv.GetSize() will always return when working with data from the Kinect
# first create a video writer instance that'll write the video to output.avi
writer = cv.CreateVideoWriter("output.avi", format, fps, resolution)
# now write 100 frames in a loop and draw an ever growing circle on each frame
for frame in xrange(0,100):
    img = getrgb()
    cv.Circle(img, (320,240), frame, (255,255,0), 0, 8, 0)
    cv.WriteFrame(writer, img)
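
One caveat, as far as I understand the old cv bindings: the AVI file is only finalized when the writer object is released, so if your script keeps running afterwards you may want to drop the reference explicitly:

del writer  # releasing the writer finalizes the AVI file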

I won’t show you how that video looks, as I don’t find it significant enough to upload to YouTube at this point. 😉

Conclusions

So it is possible, and fairly easy, to get a Kinect running on the Cubieboard. My goal for this is to have the Kinect track hands and fingers and turn my living room table into a touch input device, with a GUI projected onto its surface from above. This would effectively turn the table into a remote control for the Raspbmc (the Raspberry Pi XBMC distribution) under my TV and for the RGB LED lights that light up my living room. (Basically I’m in the process of building my own home automation system. The lights came first; they can already be controlled in color and brightness from any Android device. There’s a lot more to come, so posts about that will have to wait.)

I’ll see if the Cubieboard is powerful enough to use the Kinect for touch input recognition and will post updates on this topic once I have the time to experiment with this again.


5 thoughts on “Cubieboard Fileserver + Kinect”

  1. Ajith

    Hello DP,
    I got my Cubieboard 2 and I have also installed whatever you have mentioned above. I’m using a Cubian desktop now. I was able to run the basic libfreenect examples of OpenKinect, but when I try to open the Kinect in a Python shell I get an error like this: “Invalid Index Can’t open device”. I also blacklisted gspca_kinect and gspca_main, but even after that I get the same error. I get this error once I give the command freenect.sync_get_depth(). I don’t know how to proceed. Can you provide me with all the dependencies required and also the version of the OS that you have used, so that it will be useful for me to follow accordingly?

  2. Ajith

    Hello DP,
    I’m Ajith from India. I have been struggling for the past month trying to connect my Xbox Kinect to a BeagleBone Black. Finally I found that it’s impossible. After a humongous amount of research I found your post, which raised my hopes of connecting the Kinect to an ARM board of the BeagleBone’s size and cost. I’m planning to buy a Cubieboard2, but before that I need some assurance from you regarding the possibility of getting depth data from the Kinect with a Cubieboard, and the dependencies which are required to start the process. My email id is [deleted by admin for privacy].

    Thank you for your time, and I hope to speak with you soon.
    Best,

    V.Ajith

    1. D P

      Hello Ajith,

      As far as I know the BeagleBone Black comes with a pre-installed Linux OS; this is not the case with the Cubieboard / Cubieboard2 / Cubietruck. Those boards come pre-installed with Android. For the Cubieboard 1 I’ve used a microSD card to boot a pretty decent version of Ubuntu. From there it’s a simple matter of doing an “apt-get install” to install all the necessary software packages. The Kinect actually did work out of the box (drivers are included in the 3.x kernels). I also have a Cubietruck (although at this point not yet tested with the Kinect), which is powered by the same processor as the Cubieboard 2 (Allwinner A20). Here I installed the Ubuntu distribution directly into the on-board flash memory (they provide a very good flashing utility on their website for their A20 boards). I’m currently unsure if there’s out-of-the-box support for the Kinect on those newer boards, but even if there isn’t, it should be a simple matter of setting up a toolchain and rolling your own version of their Linux with the necessary drivers and software built in.

      1. Ajith

        Thanks a lot for the reply, DP! Now I have my Cubieboard 2 with me. I’m pretty sure that if the Kinect is supported by the Cubieboard then it’s possible on the Cubieboard2. Now what I want to know is: to what extent have you experimented with the Kinect on the Cubieboard? Have you tried the OpenNI samples? Because I’m planning to process the depth image, so if the OpenNI samples are working then I can modify them according to my needs. If possible, I request you to send me your email id so that it will be easier for both of us to communicate. Sorry for the disturbance.

        V.Ajith

  3. Daniel Seagren

    Hello DP,
    My name is Daniel Seagren and I am currently a mechanical engineering senior at Columbia University in New York City. For my senior capstone design project, I am building a motion tracking basketball passing system that tracks user position (in 3 dimensions) and adjusts machine position and ball delivery speed to accurately pass a ball to the user’s location. I’ve been researching various microcontrollers and micro PCs to be able to analyze and translate user location data from the Kinect into motor control signals. I was wondering if you would be able to answer, via email, some of my questions regarding using the Cubieboard2 as the controller. My email address is [deleted by admin for privacy].

    Thank you for your time, and I hope to speak with you soon.
    Best,

    Daniel Seagren

