This sketch shows a bird's-eye view of the capture space, with the subject's starting and ending positions marked (by color, icon, or whatever works). A line of varying width traces the subject's movements through the space: the thicker the line, the longer the person remained in that position.
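To make the "thicker = longer" idea concrete, here's a rough sketch of how that trace could be drawn. Everything in it is assumed for illustration (the `positions` array of per-frame root positions, the frame rate, the dwell radius), not part of any existing code:

```python
# Rough sketch (not an actual implementation) of the bird's-eye trace:
# line thickness at each step is proportional to how long the subject
# lingered near that position. Assumes `positions` is an (N, 2) array of
# per-frame root-joint x/z coordinates sampled at `fps` frames per second.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

def plot_birdseye_trace(positions, fps=120, still_radius=0.05):
    positions = np.asarray(positions)

    # Count, for each frame, how many frames fall within `still_radius` metres
    # of it -- a crude proxy for "time spent at this position".
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    dwell = (dists < still_radius).sum(axis=1) / fps  # seconds spent near each point

    # One segment per consecutive pair of frames, width scaled by dwell time.
    segments = np.stack([positions[:-1], positions[1:]], axis=1)
    widths = 0.5 + 4.0 * (dwell[:-1] / dwell.max())

    fig, ax = plt.subplots()
    ax.add_collection(LineCollection(segments, linewidths=widths, colors="steelblue"))
    ax.scatter(*positions[0], color="green", label="start")
    ax.scatter(*positions[-1], color="red", label="end")
    ax.set_aspect("equal")
    ax.autoscale()
    ax.legend()
    plt.show()
```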
This sketch is a combination of one of my previous sketches and a YouTube video (Autonomous Motion Synthesis through Machine Learning) that Joe showed me yesterday. Pretty much everything is indicated in the sketch. However, this might be too large a view to include in search results - maybe it's more suitable for an advanced user? It also lets you view or turn off specific joints.
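Just to make the joint toggle concrete, here's a rough, purely hypothetical sketch of the idea: the viewer keeps a visibility flag per joint and only draws the joints that are switched on. Joint names and the frame format are made up for illustration:

```python
# Hypothetical per-joint visibility toggle -- joint names are placeholders.
JOINT_VISIBILITY = {
    "head": True, "spine": True,
    "left_hand": True, "right_hand": True,
    "left_foot": False, "right_foot": False,  # e.g. the user turned the feet off
}

def visible_joints(frame):
    """Filter a frame's {joint_name: (x, y, z)} dict down to the enabled joints."""
    return {name: pos for name, pos in frame.items() if JOINT_VISIBILITY.get(name, True)}
```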
This is just a brief sketch of a possible UI for search results; at the moment it's really bulky, and hopefully I can condense it down. Right now, the pieces of information I'm including in each result are (there's a rough data sketch after this list):
- image of one frame
- framerate
- description
- tags
- available files for download
- visualization
- view more visualizations link (for advanced users) - this may slide down an additional panel, or it might redirect to a new page for just that search result?
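To keep myself honest about what each result actually needs to carry, here's a rough sketch of the data behind a single result. All of the field names are placeholders I made up, not a final schema:

```python
# Hypothetical shape of a single search result -- field names are placeholders.
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    thumbnail_url: str                  # image of one representative frame
    framerate: int                      # capture framerate (e.g. 120 fps)
    description: str                    # free-text description of the motion
    tags: list[str] = field(default_factory=list)
    download_files: list[str] = field(default_factory=list)   # e.g. ["c3d", "bvh"]
    visualization_url: str = ""         # the default (simple) visualization
    more_visualizations_url: str = ""   # advanced-user panel or separate page
```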
Since the alpha review is next Friday (!!!), the things I have to accomplish before then are:
- more sketches!!
- more UI sketches
- make my alpha review video/presentation
- CRAM-type evaluation thingy
3 comments:
I really like the top-down view idea and how you'd use thickness to denote time spent in certain areas, although I guess the usefulness of this sort of view in general will depend on the specific application (e.g. it might be really useful for getting a sense of a dance motion, but not so much for a sign language monologue). It might be nice to have this view as an auxiliary thing, though.
For the second sketch, I think your best bet is probably keeping the visualizations as simple as possible; with that sort of visualization it might be a bit hard to understand at first glance what's going on globally. However, having the scrubber that shows the skeleton at a specific frame, like in your first sketch from last week, might be a good complement to this sort of setup.
I like the idea of the first sketch a lot. But it may not be adaptable to various types of environments. For instance, if this UI were used at another school... or someone used the red cameras vs. the other cameras, the size and shape of that area changes. Would that be reflected?
Jeremy: Yeah, it would be an opt-in view that would only be applicable if the subject moves around a lot. However, in cases where you're not sure whether the subject moves, this would be a way to tell. I'll bring back the scrubber for the second sketch, thanks!!
Marissa: Yup, the size and shape of the visualization would depend on the capture area of the camera system used.