Submerged: Kinect Analysis

The Kinect had trouble reading our dancer through the screens. To resolve this quickly, we moved the Kinect to the side of the stage and used its Z-axis readings to control X-axis movement in the projection. Unfortunately, this meant sacrificing part of our planned interaction design: although we had intended to let the dancer control the size of the black void, the Kinect could not reliably track her skeleton from the side. The side placement also demanded extensive calibration, especially for vertical movement, and the process was frustrating to troubleshoot. Ultimately, we decided the best course of action was to record the dancer’s movements and examine the data graphically.
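The side-mounted workaround boils down to remapping the dancer's distance from the sensor (Z) onto the horizontal axis of the projection. A minimal sketch of that remap, in plain Java rather than our actual Processing code; the class name, calibration range, and screen width are placeholders, not our calibrated values:

```java
// Sketch of the side-mounted Kinect workaround: the dancer's distance from
// the sensor (Z) drives the horizontal position (X) in the projection.
// All constants here are hypothetical, not the values we calibrated.
public class ZToXMapper {
    static final float Z_NEAR = 1200f;   // closest tracked distance, mm (assumed)
    static final float Z_FAR  = 4000f;   // farthest tracked distance, mm (assumed)
    static final float SCREEN_W = 1280f; // projection width in pixels (assumed)

    // Linear remap with clamping, equivalent to Processing's map() + constrain().
    static float zToScreenX(float z) {
        float t = (z - Z_NEAR) / (Z_FAR - Z_NEAR);
        t = Math.max(0f, Math.min(1f, t)); // clamp to the stage bounds
        return t * SCREEN_W;
    }

    public static void main(String[] args) {
        System.out.println(zToScreenX(1200f)); // 0.0
        System.out.println(zToScreenX(2600f)); // 640.0
        System.out.println(zToScreenX(4000f)); // 1280.0
    }
}
```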

[Figure: Graph comparing Real World Y and Projected Y raw data from the Kinect.]




Contrary to our expectations, the Projected Y value was more stable than the Real World Y value, as the graph above shows. This led us to experiment further with the Projected Y value, looking for a mathematical relationship that would more accurately locate the dancer relative to the projection.

[Figure: Graph of data from a user walking along the Z-axis of the Kinect.]

Ultimately, further testing showed that the Projected Y value of the dancer’s center of mass was still unstable: the longer one stood in front of the Kinect, the lower the center of mass drifted. To work around this, we switched to analyzing blob data and tracking the highest detected user point, which proved far more consistent. By recording the user walking away from the Kinect on flat floor, we could derive a relationship between the Z-axis readings and the blob Y readings.
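The "highest detected user point" can be found by scanning the Kinect's per-pixel user map from the top row down and taking the first row that contains a user pixel. A hedged sketch of that idea; the class and method names and the tiny test grid are illustrative, not our actual sketch code:

```java
// Sketch of the "highest user point" idea: scan the per-pixel user map
// (row-major, nonzero = pixel belongs to a tracked user) top to bottom
// and return the first row containing a user pixel.
public class BlobTop {
    static int highestUserRow(int[] userMap, int w, int h) {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (userMap[y * w + x] != 0) return y; // topmost hit
            }
        }
        return -1; // no user detected
    }

    public static void main(String[] args) {
        int[] map = {
            0, 0, 0, 0,
            0, 1, 1, 0,
            0, 1, 1, 0,
        };
        System.out.println(highestUserRow(map, 4, 3)); // 1
    }
}
```

Because only the single topmost row matters, this reading is insensitive to the slow downward drift we saw in the center-of-mass estimate.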

[Figure: Graph used to find the relationship between blob Y data and Z data from the Kinect.]

This graph shows a logarithmic relationship between the Z-axis readings and the blob Y readings. Since the dancer’s real-world height is constant, we can adjust this equation to estimate her vertical height when she is standing on the boxes.
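The adjustment works by treating the fitted curve as the expected blob Y for someone standing on flat floor: on a box, the measured top of the blob sits above the prediction, and that residual estimates elevation. A sketch under that assumption; the coefficients below are placeholders, not our fitted values:

```java
// Sketch of the height adjustment: on flat floor the blob's top pixel row
// follows blobY ≈ a*ln(z) + b. When the dancer stands on a box her measured
// top row is smaller (higher on screen) than predicted; the residual
// estimates her elevation. Coefficients are hypothetical, not our fit.
public class HeightFromLog {
    static final double A = 180.0;  // hypothetical fitted slope
    static final double B = -900.0; // hypothetical fitted intercept

    // Predicted top-of-blob pixel row for a user on flat floor at distance z (mm).
    static double floorBlobY(double z) {
        return A * Math.log(z) + B;
    }

    // Positive residual = dancer is elevated (pixel rows grow downward).
    static double elevation(double measuredBlobY, double z) {
        return floorBlobY(z) - measuredBlobY;
    }
}
```

Called once per frame with the current Z reading and measured blob top, this yields a roughly distance-independent elevation signal.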

[Figure: Graph of adjusted log output vs. Z data, with the user stepping onto boxes.]

The graph above shows our analysis of the Kinect data; the steps onto each box are clearly visible. The logarithmic equation was then integrated into the Processing sketch.
