Eric Maslowski, Technical Creative Consultant

Experiments Home

Kinect Hacks and Gesture Tests

The Kinect exploded onto the gaming and natural user interface scene. People had it hacked within a few days, and a collective desire to see how a depth-sensing camera could be used was born. Caught up in the same energy, the UM3D Lab started playing with the hacks coming out and seeing how they could be combined with other technology. More specifically, we were working with an Architecture student at UofM, Robert Yuen, who was investigating the Kinect as a poor man's LIDAR scanner. We were also exploring its use for natural user interfaces in virtual reality (video slide 2).

The Kinect has a few components of interest, but the one I found most fascinating was the depth-sensing camera, which allows one to get more information about a scene than an ordinary camera provides. The depth information comes in as a greyscale video feed, which can be used to separate groups of pixels based on depth for additional processing (e.g., identifying a person). Initially, we looked at hacks such as Brekel and FAAST, which provide a quick way into the system. With these we tried mapping movements and poses to keyboard events, which made it extremely easy to test in a virtual environment. We used the same functionality to do 3D sculpting with gestures as well as digital painting.
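The depth-based separation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not our actual pipeline: it assumes the depth stream has already been read into a 2-D array of millimeter values, and the function name and sample frame are hypothetical.

```python
import numpy as np

def segment_by_depth(depth_frame, near_mm, far_mm):
    """Return a boolean mask of pixels whose depth falls in [near_mm, far_mm].

    depth_frame: 2-D array of per-pixel depth values in millimeters,
    as a Kinect depth stream might deliver them. Pixels inside the band
    (e.g., a person standing at a known distance) are marked True for
    further processing.
    """
    return (depth_frame >= near_mm) & (depth_frame <= far_mm)

# Hypothetical 4x4 depth frame (mm); a "person" stands roughly 1.2 m away
# against a background about 3 m away.
frame = np.array([
    [3000, 3000, 1200, 1250],
    [3000, 1180, 1210, 3000],
    [3000, 1190, 1220, 3000],
    [3000, 3000, 3000, 3000],
])
mask = segment_by_depth(frame, 1000, 1500)  # True where depth is 1.0-1.5 m
```

The resulting mask isolates the foreground blob, which can then be handed off to whatever recognition or gesture logic comes next.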

Microsoft has since given its blessing and released an SDK for the device, but we found that its range is far more limited than that of the hacks. Its user and skeleton recognition is far more robust, though. We're now looking into tighter integration of the system, its use in Augmented Reality, and multiple Kinects working together.

Special thanks to Ted Hall and Mathew Schwartz.