The Kinect had trouble reading our dancer from behind the screens. To resolve this quickly, we placed the Kinect at the side of the stage and used its Z-axis (depth) readings to control X-axis movement in the projection. Unfortunately, this meant sacrificing part of our planned interaction design: although we had originally intended to let the user control the size of the black void, the Kinect could not accurately track the skeleton from that angle. Mounting the Kinect on the side also required extensive calibration, especially for vertical movement, which made the process frustrating to troubleshoot. Ultimately, we decided the best course of action was to record our dancer’s movements and examine the data in graphical form.
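The side-mounted setup boils down to a linear remapping of depth onto screen position. As a minimal sketch (the calibration bounds `Z_MIN` and `Z_MAX` here are illustrative placeholders, not our measured values), the idea looks like this in plain Java, mirroring Processing’s `map()` with a clamp:

```java
public class DepthToX {
    // Hypothetical calibration bounds for the side-mounted Kinect, in
    // millimeters of depth along the stage. The real values would come
    // from calibrating against the actual stage.
    static final float Z_MIN = 1000f;
    static final float Z_MAX = 4000f;

    // Map depth along the stage to horizontal position in the projection
    // (pixels), clamping readings that fall outside the calibrated range.
    static float zToX(float z, float screenWidth) {
        float t = (z - Z_MIN) / (Z_MAX - Z_MIN);
        t = Math.max(0f, Math.min(1f, t));
        return t * screenWidth;
    }

    public static void main(String[] args) {
        // Dancer mid-stage maps to mid-screen on a 1280-pixel projection.
        System.out.println(zToX(2500f, 1280f));
    }
}
```

In a Processing sketch the same remap is a one-liner with `map()` plus `constrain()`; the clamp matters because depth readings spike when the sensor momentarily loses the dancer.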
Contrary to what we originally expected, the Projected Y value was more stable than the Real World Y value, as the above graph shows. This prompted further experimentation with the Projected Y value to find a mathematical relationship that would more accurately locate the dancer relative to the projection.
Through further testing, however, the Projected Y value of the dancer’s center of mass still proved unstable: the longer one stood in front of the Kinect, the lower the center of mass would drift. To work around this, we analyzed blob data instead, using the highest detected user point, which gave a much more consistent reading. By recording the user walking on the flat floor away from the Kinect, we could establish a relationship between the Z-axis readings and the Blob Y readings.
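Finding the highest detected user point amounts to scanning the user mask from the top row down and returning the first row that contains a user pixel. A minimal sketch, assuming a flat pixel mask in the style of SimpleOpenNI’s `userMap()` (nonzero entries mark user pixels; the helper name is ours):

```java
public class BlobTop {
    // Return the topmost image row containing a user pixel, or -1 if the
    // mask is empty. Image rows count downward, so a smaller row index
    // means a physically higher point.
    static int topmostUserRow(int[] mask, int w, int h) {
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (mask[y * w + x] != 0) return y;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int w = 4, h = 4;
        int[] mask = new int[w * h];
        mask[2 * w + 1] = 1; // single user pixel in row 2
        System.out.println(topmostUserRow(mask, w, h)); // prints 2
    }
}
```

Because this reads the whole silhouette rather than a tracked skeleton joint, it does not drift the way the center-of-mass estimate did.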
This graph shows a logarithmic relationship between the Z-axis readings and the Blob Y readings. Since the dancer’s real-world height is constant, we can adjust this equation to estimate her vertical elevation when she is standing on the boxes.
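The adjustment works by comparing the observed Blob Y against what the fitted floor curve predicts at the same depth; any gap indicates elevation. A sketch of that idea, with illustrative coefficients (`A`, `B` stand in for the values fitted from our floor-walk recording, and we assume image Y decreases as the head rises):

```java
public class LogHeightModel {
    // Illustrative coefficients for blobY = A * ln(z) + B. The actual
    // values came from fitting the recorded floor-walk data.
    static final double A = -120.0;
    static final double B = 1100.0;

    // Predicted Blob Y for the dancer standing on the floor at depth z.
    static double floorBlobY(double z) {
        return A * Math.log(z) + B;
    }

    // Elevation estimate: how far above the floor prediction the observed
    // blob top sits. Roughly zero on the floor, positive on a box.
    static double elevation(double observedBlobY, double z) {
        return floorBlobY(z) - observedBlobY;
    }

    public static void main(String[] args) {
        double z = 2000.0;
        // Standing on the floor: observed matches predicted, elevation ~0.
        System.out.println(elevation(floorBlobY(z), z));
        // Blob top 50 image units higher than the floor curve predicts.
        System.out.println(elevation(floorBlobY(z) - 50.0, z));
    }
}
```

The key property is that the depth-dependent part of the reading cancels out, leaving only the height change caused by the boxes.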
The above graph displays our analysis of the data from the Kinect; the steps taken onto each box are clearly visible. The logarithmic equation was then integrated into the Processing sketch.