For the final demo, I generated a page of results in the same Processing sketch. Granted, they don't have the results page UI around them, so for now (while waiting for Processing JS to implement 3D) I manually inserted .gifs of the skeletons + bird's eye view into the pages.
Walk Results Page
Jump Results Page
Random Results Page
Each page displays 8 results. The skeletons look quite small now (they unfortunately got so blown out by the projector at the final demo that you couldn't see any of them), but hopefully I'll finish the implementation where the joints change color based on changes in acceleration. Also, it would be nice to have some indication of how long each motion is, because I'm sure that information would be useful.
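To give an idea of what I mean by the color change, here's a minimal sketch of how it might work. It assumes the joint positions are already stored per frame as PVectors; the function names and the acceleration scale are my own placeholders, not the actual implementation.

```processing
// Hypothetical sketch: color a joint by how quickly its motion is changing.
// Acceleration is estimated with a second finite difference over three
// consecutive frames, then mapped from blue (steady) to red (rapid change).

color jointColor(PVector prev, PVector curr, PVector next, float maxAccel) {
  // second difference: next - 2*curr + prev approximates acceleration
  PVector accel = PVector.add(PVector.sub(next, curr), PVector.sub(prev, curr));
  float t = constrain(accel.mag() / maxAccel, 0, 1);
  return lerpColor(color(0, 0, 255), color(255, 0, 0), t);
}

void drawJoint(PVector prev, PVector curr, PVector next) {
  fill(jointColor(prev, curr, next, 5.0));  // 5.0 is an arbitrary scale guess
  noStroke();
  ellipse(curr.x, curr.y, 6, 6);            // small dot at the joint position
}
```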
For the bird's eye view, there are two small indicators (one green and one red) marking where the motion starts and finishes. For some of the motions where the subject stays mostly in the same place, you can't really see the green/red lines or the yellow dot, because the information gets drawn on top of itself and the space the subject occupies within the capture space is small. For the case where the subject stays in one place, I thought about enlarging the resulting "dot," but then you wouldn't be able to distinguish it from a subject performing a motion that occupies a circle of space.
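For reference, a rough sketch of how the bird's eye view drawing could look. It assumes the subject's floor positions per frame are collected in a list called path; the names and the tick-mark style are assumptions, not the actual code, and it reproduces the overlap problem described above when the path barely moves.

```processing
// Hypothetical sketch of the bird's-eye view: a yellow trace of the motion
// path with a green tick at the start and a red tick at the finish.
void drawBirdsEyeView(ArrayList<PVector> path) {
  stroke(255, 255, 0);  // yellow trace of the subject's path on the floor
  noFill();
  beginShape();
  for (PVector p : path) {
    vertex(p.x, p.y);
  }
  endShape();

  // small green and red ticks marking where the motion starts and finishes
  PVector start = path.get(0);
  PVector end   = path.get(path.size() - 1);
  stroke(0, 255, 0);
  line(start.x - 4, start.y, start.x + 4, start.y);
  stroke(255, 0, 0);
  line(end.x - 4, end.y, end.x + 4, end.y);
}
```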
To deploy my code, I'd ideally like to wait for Processing JS to finish its 3D implementation. That way, there can be multiple HTML5 canvases on the same page that load pretty quickly, instead of embedding Java plugins, which is what you have to do with current Processing sketches. Processing JS just makes executing Processing code in the browser a lot easier and faster, because it runs on the HTML5 canvas and WebGL.