Monday, May 9, 2011

Some more changes

1. I changed the sampling rate of the frame-by-frame view so that the interval between displayed frames now depends on how long the motion is (a sketch of the computation follows the examples below).

Here's a walk motion that has 500+ frames:


And here's a walk motion that has 200-300 frames:


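For the curious, the interval computation is roughly the following (a minimal sketch, not my exact code; targetSamples is a name I'm using here for illustration):

// Pick a frame interval so every motion yields about the same number
// of displayed skeletons, regardless of how many frames it has.
int samplingInterval(int totalFrames, int targetSamples) {
  return max(1, totalFrames / targetSamples);
}

So a 500-frame walk with targetSamples = 10 draws every 50th frame, while a 250-frame walk draws every 25th.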

2. I changed elements of the interface so that it now explicitly says how many frames a motion contains, and I also commented out the "Additional Views" link for now.

3. Here's the final presentation video + voiceover (as it sounded during the actual presentation, but this time the graphics will NOT be blown out by the projector).



4. And here's the final report (contains details), if you want to read it: Final Report

Thursday, May 5, 2011

Joint color changes

I implemented a new visualization where the joint colors change based on acceleration. Red means the joint is speeding up, and blue means the joint is slowing down. There are three versions below (a sketch of the color mapping follows the examples):

This one displays the bones changing color:

This one displays the joints changing color:

And this one displays the joints changing color and the bones changing color based on the average color of the joint + its parent:


(keep in mind that the skeletons are displayed every 25 frames right now...)
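The gist of the color mapping is roughly this (a simplified sketch, not my exact code; I'm approximating acceleration as the change in a joint's speed between consecutive frames, and the 0.1 scale factor is arbitrary):

// Color a joint by whether it is speeding up (red) or slowing down (blue).
// speedNow and speedPrev are the joint's speed at this frame and the previous one.
color jointColor(float speedNow, float speedPrev) {
  float accel = speedNow - speedPrev;            // crude acceleration estimate
  float t = constrain(0.5 + accel * 0.1, 0, 1);  // 0.5 = constant speed
  return lerpColor(color(0, 0, 255), color(255, 0, 0), t); // blue -> red
}

For the third version, each bone's color is essentially lerpColor(jointCol, parentCol, 0.5), i.e. the average of the joint's color and its parent's.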

Tuesday, May 3, 2011

For the final demo

For the final demo, I generated a page of results in the same Processing sketch. Granted, they don't have the results-page UI around them, so for now (while waiting for Processing JS to implement 3D) I manually inserted .gifs of the skeletons + bird's eye view into the pages.

Walk Results Page
Jump Results Page
Random Results Page

Each page has 8 results displayed. The skeletons look quite small now (they unfortunately got so blown out by the projector at the final demo that you couldn't see any of them), but hopefully I'll finish the implementation where the joints change color based on changes in acceleration. Also, it would be nice to have some indication of how long each motion is; I'm sure that information would be useful.

For the bird's eye view, there are two small indicators (one green and one red) marking where the motion starts and finishes. For some motions where the subject stays mostly in one place, you can't really see the green/red lines or the yellow dot, because everything gets drawn on top of itself and the subject occupies so little of the capture space. For that case, I thought about enlarging the resulting "dot," but then you couldn't distinguish it from a subject whose motion occupies a circle of space.
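Drawing the bird's eye view boils down to something like this (a minimal sketch, assuming path holds the root's ground-plane position per frame; I'm drawing the start/finish indicators as dots here for simplicity):

// Draw the bird's-eye trail with a green start marker and a red finish marker.
void drawBirdsEye(ArrayList<PVector> path) {
  stroke(255, 204, 0);   // yellow trail
  noFill();
  beginShape();
  for (PVector p : path) vertex(p.x, p.y);
  endShape();

  noStroke();
  fill(0, 255, 0);       // green = start
  ellipse(path.get(0).x, path.get(0).y, 6, 6);
  fill(255, 0, 0);       // red = finish
  PVector last = path.get(path.size() - 1);
  ellipse(last.x, last.y, 6, 6);
}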

In order to deploy my code, I'd ideally like to wait for Processing JS to finish its implementation of 3D. That way, there can be multiple HTML5 canvases on the same page that load pretty quickly, instead of embedded Java plugins, which is what current Processing sketches require. Processing JS just makes executing Processing code in the browser a lot easier/faster, because it runs on HTML5 and WebGL.

Sunday, April 24, 2011

He's walking, kind of sort of?

The elbow joints look a bit off, so I'll try to see what's up with that. However, I think the rest of his body looks about right? I'm using this animation (frame 1) as the source.


Basically, rotations are confusing. With much help from a certain someone, I revised the draw method so that it applies the rotation based on the axis in the ASF file, then applies the rotation specified in the AMC file, and then undoes the axis rotation. The result is the above screenshot, and it looks a lot closer to the final product than anything I've gotten so far, so I'm keeping it for now.
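In Processing terms, the per-bone transform works out to C * M * C^-1, where C is the axis rotation from the ASF file and M is the frame's rotation from the AMC file. A rough sketch (I'm glossing over the exact X/Y/Z rotation order, which the ASF file specifies):

void applyBoneRotation(PVector axis, PVector amcRot) {
  // apply the axis rotation C
  rotateX(radians(axis.x));
  rotateY(radians(axis.y));
  rotateZ(radians(axis.z));
  // apply this frame's rotation M
  rotateX(radians(amcRot.x));
  rotateY(radians(amcRot.y));
  rotateZ(radians(amcRot.z));
  // undo C: inverse rotations in reverse order
  rotateZ(radians(-axis.z));
  rotateY(radians(-axis.y));
  rotateX(radians(-axis.x));
}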

Thursday, April 21, 2011

Sup, skeleton?

Jon McCaffrey is the best. No, seriously. He walked me through what I was doing wrong and what I wasn't sure of, and helped me make sense of it. I gave him a huge hug and then tried to pick him up, but that was quite difficult, as I'm not strong enough and he is like 3x my size. So that failed.

But here, I think this looks ok? Needs stylizing and whatnot, but at least things look like they're in the right place!

And because I know you all like "oops" shots, this was a screenshot taken a few minutes before we sorted out mr. skeleton:

Wednesday, April 20, 2011

Oopsies

More debugging to do, but I think these look kind of cool. This one reminds me of an alien:

And this one reminds me of this graphic by the NYT, mostly because all the lines are straight and pointing along the same axis:

Monday, April 18, 2011

Not quite a skeleton

But something displays. It has something to do with the order in which Processing applies the translations/rotations. The AMC parser works fine at the moment, but I think I just need to debug the recursive display() function I wrote. The challenge is making sure all the drawing/translating/rotating calls execute in the proper order and in the correct coordinate systems relative to each other, which is getting messed up somewhere at the moment.

Current train of thought:

The idea is that you want to keep drawing a chain of joints within the coordinate system of the chain's stem. Therefore, you need to push/pop matrices so that once you run out of joints in that lineage, you pop back to the common ancestor and start drawing again from its coordinate system. So if you start with the root, the left arm gets drawn in succession, but then you can't start drawing the right arm without first reverting to the root's coordinate system. Thus, you push the matrix right after you draw the root and pop it when you're about to start drawing the right arm. There's a code sample on page 112 of Getting Started with Processing (Casey Reas & Ben Fry) that makes a lot of sense in terms of how the transformation hierarchy works. I think I'm just hitting a block in articulating my thoughts, but it makes sense to me.
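In code, the idea reduces to something like this (a minimal sketch; Bone, applyTransform(), and drawBone() are stand-ins for my actual classes/methods):

// Recursively draw a bone and all its descendants. pushMatrix() saves the
// parent's coordinate system; popMatrix() restores it before the next sibling,
// so the right arm isn't drawn in the left arm's coordinate system.
void display(Bone bone) {
  pushMatrix();
  applyTransform(bone);        // translate/rotate into this bone's frame
  drawBone(bone);
  for (Bone child : bone.children) {
    display(child);            // each child pushes/pops its own matrix
  }
  popMatrix();
}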


Will debug today.

Also, I wrote a Frame class that holds a skeleton and a frame ID. I'm trying to make it so that the parseAMC() function creates a new Frame object for every frame it reads, but right now, the program seems to overwrite the same Frame, even though I used a similar code structure for creating new Bones in parseASF() and that works fine. This is where a better IDE (with real debugging functionality) would be helpful. That being said, debugging with the Processing IDE has been extremely slow so far; I didn't think it would be this slow.
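One likely culprit (just a guess until I actually debug): if parseAMC() reuses a single Frame or Skeleton reference instead of allocating fresh objects, every entry in the list ends up aliasing the same data. The fix would be to construct inside the loop, something like this (numFrames, Skeleton, and readFrameData() are placeholders):

// Allocate a new Frame (and a new Skeleton) on every iteration;
// reusing one object makes all list entries point at the same data.
ArrayList<Frame> frames = new ArrayList<Frame>();
for (int id = 0; id < numFrames; id++) {
  Frame f = new Frame(id, new Skeleton());  // fresh objects each time
  readFrameData(f);                         // fill in this frame's pose
  frames.add(f);
}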

I've also been sick for the past week and a half, with a recurring headache the entire time. Not the worst, but certainly annoying when you have to stare at a computer screen all day. :(

Saturday, April 9, 2011

OMG I HAVE SPHERES

Well, kind of sort of. Last night, I realized that ProcessingJS actually doesn't support 3D yet, which is a major pain in the ass. I'm going to code everything in Processing for now, and when ProcessingJS does support everything 3D (which should be soon...), it's a simple task to port over the code to ProcessingJS. Like, copy-and-paste simple.

The smaller sphere is supposed to be the root (I just set radius = 10 for now) with a translation on it, but I'm going to do some debugging now and flesh out the AMC code. At least I have something.

Thursday, April 7, 2011

Beta Review

So on Tuesday, I had my beta review with Norm, Joe, and David. Essentially, for the past week, I had been working on parsing the asf/amc files using Processing. However, as I described in the meeting, ProcessingJS unfortunately does not have a BufferedReader implemented, meaning I have to treat the entire file as an array where each line of the file is one entry. Barf.
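Concretely, that means reading with Processing's loadStrings(), which loads the whole file into a String array, one line per entry (the filename below is just an example):

// No BufferedReader in ProcessingJS, so read the whole file at once:
// each element of lines[] is one line of the .amc file.
String[] lines = loadStrings("07_05.amc");
for (int i = 0; i < lines.length; i++) {
  String line = trim(lines[i]);
  if (line.startsWith("#") || line.startsWith(":")) continue; // skip header lines
  // ...parse the frame number or bone name + rotation values...
}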

So I think I'm pretty much fine with regard to the asf/bone/skeleton structure, but the amc file was a little confusing because I wasn't sure of the best way to interpret it. Thankfully Joe cleared it up, and I shall get to working on that, including displaying just one joint/root and hardcoding translations/rotations to make sure the system works.

After that, I can replicate the skeleton. Whee.

Friday, April 1, 2011

Got a rough UI up...

So I got a rough UI for the search results page up. It pretty much adheres to my sketch a few posts back, but now it's actually marked up and stuff in an adaptable way, I hope! The page is here (currently has dummy content and doesn't do anything, but for the purposes of the simple user study, it'll do), and a screenshot of it is below:

After playing around some more with webGL, Lu alerted me to the existence of ProcessingJS, which enables you to use Processing to make visualizations for the web without using any plug-ins. Hooray! With ProcessingJS, I can code in either Java or Javascript, and it'll work on a webpage. After testing out a few examples, I've determined that this is more what I want to work with, so I got the OK from Joe to switch over to Processing.

Before the beta review, I'm going to try to get a skeleton up and running.

Thursday, March 24, 2011

Self-Evaluation

So this past week I tried working on getting a skeleton to display using webGL. Unfortunately, I wasn't entirely successful. I spent most of the time looking at sample webGL code, looking at the asf/amc parser code Joe sent, and figuring out how I would even go about doing something like this. Lu clued me in to something called o3d, which builds on webGL to let developers create interactive 3D applications. It looks like something I can use, so I was trying to get a sample o3d program to display. Due to stupid mistakes on my part, it took a lot longer than it should have. But you can see it here, in case you're wondering.

I met with David today to go over the sketches I had done. He suggested that I not abstract the data as much, which is really valuable and helpful advice. I agree with him, as some of them (particularly this one and this one) are hard to read and interpret at first glance. They involve a fairly significant learning curve, which I don't think is the best thing to present in search results. I'm not ruling them out though – they may be more appropriate for an advanced view.

Self-evaluation:
Honestly, I have a lot left to do. By the beta review, I'd like to have a skeleton up and running. After that, I need to mock up a visualization and do a quick and small user study before the demo day. For that study, the "moviz" webpage will be hard coded and won't actually interact with any databases. This page shouldn't be hard to do.

I feel like I could be doing more, and I don't know if it's because I'm a relatively slow coder/slow to pick up new programming languages or because it's second semester senior year and I'm mentally exhausted, but I'm not happy with the schedule I'm on/my progress so far. I expected to code, but I didn't expect to do this much with gl, and I wasn't entirely prepared for that given the last time I touched openGL was a year and a half ago. I went into this project hoping to work with Processing, but things changed and I need to adapt to that. Hopefully I'll get everything I want done in time.

Also, my computer severely lags whenever I have a webGL page running. It's really annoying.

Thursday, March 17, 2011

Playing Around with WebGL

This week, I read up on WebGL and tried to play around with it. Originally, I was planning on doing a "Hello World" example with some squares, but then I realized that I actually don't know how to render text in the window frame. Don't remember doing it in 277 or 460. :( Anyhow, a quick Google search didn't help much, because most results ended up involving textures (is there an easier way to render text in the actual drawing window?).

I then realized that it had been a year and a half since I had done anything hands-on with openGL, and thus needed a refresher. What I ended up doing was following a tutorial on Learning WebGL and getting a blue square to display. Then I got two blue squares to display. This took a lot longer than expected because I wanted to type up the sample code myself and figure out how it works. I think I got it for the most part... Check out the super exciting screenshot below.
WebGL is based on Javascript for the most part (except for the shaders), so the syntax is different from what I'm used to for graphics programming. I like that it's a web-based format though, and I can also use HTML5.

Plans for next week:
  • get text to display
  • get a skeleton to display (this involves being familiar with reading/parsing files in webGL)

Saturday, March 5, 2011

I have a TEDache

In the best way possible, of course.

I realize I'm really late in posting this, but this past week, I've had the immense pleasure and fortunate opportunity to attend TEDActive, the simulcast of TED 2011 in Palm Springs, CA. While there, I barely had time to pee, let alone write a blog post and do senior project work. The experience was incredible, and I would gladly do it all over again. I met some amazing people, all of whom had accomplished extraordinary things. I felt so out of place at first, but the beauty of this conference lies in the people and that they all believed I had something valuable to contribute. I'm still trying to figure out what I contributed, but it's nice to know they found value in talking to me. I face a while of battling post-TED depression, and I fully intend on trying my best to become a TEDster and uphold the brand through future TEDx events I plan to organize. I apologize terribly to Joe, Norm, David, and Amy (and anyone else who may care...) for my lack of posting. I'll make up for it as soon as I can.

Meanwhile, I'm sitting in the Palm Springs airport, writing this post on my iPhone because the wifi doesn't work on my Mac :( I forgot how tedious it can be to write a massive amount of text from the phone.

To describe the feedback I got from my alpha review, here is a bulleted list for your reading convenience:
- Presentation: too many slides! Go slower next time.
- User study: is it even necessary? The CMU example is terrible.
- More sketches allowing for comparison.
- Need to start coding! WebGL is good.
- Try sketches that show the figure of the person, to allow for blending.

Thursday, February 24, 2011

'Twas the Night Before the Alpha Review...

...And all through the lab, I am still working.

This past week has been me in a ball of stress because everything is happening this week: projects are due, quizzes need to be taken, and there's a review to get through. I think I finished all my slides for tomorrow; I just need to make them into a movie and time the slides.


I did two more sketches though.

In this sketch, footprints are displayed along the path of motion (a thick line). Unfortunately, this doesn't present a full view of the motion.

This sketch is meant to be a proximity chart for joints (I got the idea from E.J. Marey's graphic on train schedules). I'm not sure if this is the easiest to comprehend or if it's even a good visualization, but the idea is to display when joints get closer to each other. In this example, I tried to depict a running cycle: from 0 to 10 (these numbers are arbitrary), the left arm and left leg move closer to each other, and the right arm and left leg move closer to each other. I think my graph isn't perfect (there are a few mistakes and some things could be clarified), but that's why this is just a sketch.

I will blog next week about comments from the review, and I'll try to revise the sketches and incorporate feedback, but it might be a bit hard for me to find time to do that since I'll be in Palm Springs attending TEDActive 2011. Commence pee-in-my-pants-excitement-dance. Geez, I'm so nervous.

Thursday, February 17, 2011

Yippee, more sketches


This sketch shows a bird's eye view of the capture space, with the subject's starting and ending positions identified through colors/icons/whatever. A line of varying width traces the subject's movements through the space: the thicker the line, the longer the person remained in that position.
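If I end up coding this, the width trick is probably just mapping stroke weight to how slowly the subject moves along each segment. A rough Processing-style sketch (path, and the 0-5 speed range, are assumptions for illustration):

// Thicker stroke where the subject lingers: weight is inversely
// proportional to the distance covered between consecutive frames.
void drawDwellTrail(ArrayList<PVector> path) {
  for (int i = 1; i < path.size(); i++) {
    PVector a = path.get(i - 1);
    PVector b = path.get(i);
    float speed = PVector.dist(a, b);
    strokeWeight(map(constrain(speed, 0, 5), 0, 5, 8, 1)); // slow = thick
    line(a.x, a.y, b.x, b.y);
  }
}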


This sketch is a combination of one of my previous sketches and this youtube movie (Autonomous Motion Synthesis through Machine Learning) that Joe showed me yesterday. Pretty much everything is indicated in the sketch. However, this might be too large a view to include in search results - maybe it's more suitable for an advanced user? It also allows you to view certain joints or turn certain joints off.


This is just a brief sketch of a possible UI for search results: at the moment, it's really bulky. Hopefully I can condense it down. Right now, the pieces of information I'm including in each result are:

  • image of one frame
  • framerate
  • description
  • tags
  • available files for download
  • visualization
  • view more visualizations link (for advanced users) - this may slide down an additional panel, or it might redirect to a new page for just that search result?


Since the alpha review is next Friday (!!!), the things I have to accomplish before then are:

  • more sketches!!
  • more UI sketches
  • make my alpha review video/presentation
  • CRAM-type evaluation thingy

Friday, February 11, 2011

Sketches

So this past week, I did some sketches for possible visualizations. Some of them might be a little out there, and some others might be more feasible.

Sketch 1:

The idea for this visualization is that you have a skeleton shown at Frame 1, with its major joints labeled with dots. In order to avoid showing many skeletons at various frames, you instead plot each joint's position at a frame and connect it to the joint's position at the previous frame, creating a kind of continuous line of motion. This hopefully makes it easier to see which body parts move the most and which stay stagnant. Of course, my drawing is super bad and I drew it at random, so we'll have to load an actual motion into it to see how it looks. Another idea was that if you were to hover over any part of the group of lines, you could see what the skeleton looks like at that particular frame. Perhaps, if it scales well, you could also choose to view it from the front/back, which could present a very different view.
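In code terms, each trace is just a polyline through that joint's per-frame positions. A Processing-style sketch (jointPositions[j][f], the position of joint j at frame f, is a placeholder):

// Draw one continuous motion line per joint by connecting its
// position at each frame to its position at the previous frame.
void drawJointTraces(PVector[][] jointPositions) {
  noFill();
  for (int j = 0; j < jointPositions.length; j++) {
    beginShape();
    for (int f = 0; f < jointPositions[j].length; f++) {
      vertex(jointPositions[j][f].x, jointPositions[j][f].y);
    }
    endShape();
  }
}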

Sketch 2:

This sketch is mainly for the skeleton (i.e., what to display at a given frame). The idea was to place arrows indicating direction of movement at each of the major joints, with the direction determined by the difference in position between this frame and the next. Problem: the arrows might appear visually cluttered, so they would have to be thin and lightly colored. There are various ways to style the arrows (length, shape, opacity, width) so that they don't appear as intrusive. Maybe the skeleton could take a back seat and let the arrows be the focus.

Sketch 3:

This is just your typical frame-by-frame display of motion. Standard.

Sketch 4:

This concerns the skeleton as well. Maybe for each frame, we can highlight the parts of the body that see the most motion, or the part of the body whose position changes from the previous frame to this frame. The highlights could also be on a spectrum: lighter color = less movement, darker color = more movement. These skeletons can be displayed side by side (like in Sketch 3).

Sketch 5:

Now this one is more of an overlay of skeletons at each frame (or intervals of frames). Each skeleton would be translucent so you can see which body parts stay in the same place (indicated by a darker color), or which body parts move around more (indicated by lighter colors). The idea is to make this look like a slowed down motion clip, where you see traces of the frames before. Onion skinning, is that the term?
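The translucency part is straightforward alpha blending, roughly like this (a sketch; numFrames and drawSkeletonAt() are placeholders):

// Onion skinning: overlay the skeleton every N frames at low opacity.
// Where body parts overlap across frames, the alpha accumulates, so
// stationary parts read darker and moving parts read lighter.
int interval = 25;
for (int f = 0; f < numFrames; f += interval) {
  stroke(0, 0, 0, 40);   // black at roughly 15% opacity
  drawSkeletonAt(f);     // draw the pose at frame f
}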

Sketch 6:

This is another way of displaying the skeleton per frame. Instead of seeing the skeleton as a whole, maybe someone would like to focus on a specific body part, say the legs. For each frame displayed, we could either show just the legs (resulting in a really weird torso-less body), or we could show the whole skeleton with the rest of the body in a light color and the legs in a dark color. This allows you to focus on the body part that concerns you. For example, if you're searching for motion capture data of an Irish dancer, you probably want to see movements of the legs and not the upper body, since the upper body stays still most of the time anyway.

So for next week, my plans are to:

  • Iterate on these sketches
  • Start sketching the website (results page) layout
  • Look at the asf/amc parser code that Joe will hopefully send me!

Thursday, February 3, 2011

More Ideas

So unfortunately, this week has been a little slow for me due to various reasons, and I haven't worked on my project as much as I had hoped. One interesting idea (among several) that surfaced during a meeting with Orkan Telhan, David, and Joe on Tuesday was the idea of stenography, and how it may apply to visualizations. Stenography, if you're not aware, is the process of writing in shorthand. So the question is, is there a shorthand way of displaying motion? I suppose this is the question I'm trying to answer, but phrased in a different way.

Joe was also kind enough to email several major animation/effects studios to ask them how they organize their motion capture data, and how they pull out specific shots. So far, only Sony has responded, and they pretty much said nothing of relevance to my project :/

Basically, next week I plan on doing the following:

  • Doing more research
  • Sketching more motion visualization ideas (including things that may be "out there")
  • Sketching a preliminary search results UI
  • Deciding on what technology to use for coding the visualizations
  • Figuring out how to use the code that the lab already has for parsing asf/amc files

I also set up webspace on my own domain for temporarily hosting everything. The URL is movis.yiyizhou.com, so you can go there if you want, but all it says is "hello" at the moment. Not very exciting.

Wednesday, January 26, 2011

Background Research

I just got sent this interesting research paper, "TotalRecall: Visualization and Semi-Automatic Annotation of Very Large Audio-Visual Corpora." It's by the folks over at the MIT Media Lab. The reason why I thought the paper was interesting was not for the TotalRecall technology itself, but for one of the images within the paper.


After seeing this, I started thinking about how we might visualize actors moving inside the motion capture area, since most motions aren't captured with the actor standing in one spot. At this point, I still don't know if this visualization is entirely necessary, but it's worthwhile to think about. Since parts of the image in the MIT paper looked like motion trails of some sort, I was led to think about heat maps, which consequently led me to Tracking Taxi Flow Across the City by the NYTimes Graphics Department. Maybe we can apply the concept of heat maps to visualizing a short period of motion across a space.



However, we should still be able to see the actual motions themselves in some way. This paper showcases an interesting image in which frames are shown not at measured intervals but at the moments of importance in the motion.


What if we were to add a smooth curve connecting the root of the skeleton in each frame? This could showcase the change in y-position throughout the motion, which can be useful.
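For what it's worth, Processing-style code makes such a curve easy with curveVertex() (a sketch; rootPositions is a placeholder array of the root's position per frame):

// Smooth (Catmull-Rom) curve through the root's position at each frame.
// The first and last points are repeated so the curve passes through them.
noFill();
beginShape();
curveVertex(rootPositions[0].x, rootPositions[0].y);
for (PVector p : rootPositions) {
  curveVertex(p.x, p.y);
}
curveVertex(rootPositions[rootPositions.length - 1].x,
            rootPositions[rootPositions.length - 1].y);
endShape();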

My sketches are below. Apologies if you can't read my handwriting. I would appreciate any thoughts/comments on these! They are very rudimentary as of now. Also, you'll notice the monitor behind the sheet of paper. I took the picture with my iPhone.

Saturday, January 15, 2011

Abstract

A big issue facing technology today is how to make sorting through search results faster and more efficient. The computer graphics industry is not devoid of said issue, particularly when it comes to finding appropriate motion capture data for projects. Since acquiring and maintaining a good motion capture system is both cumbersome and expensive, the need for accessible and relevant motion data is prevalent. However, searching for motions is not an easy task. Currently, the leading motion data provider, Carnegie Mellon University, has a system where users search based on tags. But tags can be misleading, in that one person's perception of "walk" may be very different from someone else's, and results are returned in a text-only format. Thus, how do you improve the results-sorting process so people can easily find the motion they want? How would you interpret and display the 3D data so that users can efficiently analyze and compare search results without having to download and open movie files? I hope to use my knowledge of data visualization to come up with a better and more visual solution to this problem, supporting the final solution with data obtained from user studies.

[Edit: January 21]
In this past week, I met with Joe and David Comberg (who is going to be one of my advisors, focusing on design and visualization). He suggested I meet with a new FNAR professor, Orkan Telhan, who specializes in interdisciplinary art and design. I'm quite keen to get his thoughts.

I also found a paper on IEEE titled "A Study on Motion Visualization System Using Motion Capture Data," and the abstract sounded exactly like what I was looking for. However, the actual paper was not so relevant. Boo.

For the week of January 24, I plan on doing more research, reading more papers, and finding more examples of visualizations that relate to this project. Maybe I'll actually find a decent reference editor for the Mac. And maybe I'll also get to play with the brand new shiny Mac Pro that is going to reside in the SIG Lab (thanks Amy!). You have no idea how excited I am about this. OS X is what I primarily use at home, and to be able to work on this project in the same developing environment in the SIG Lab will be tremendously convenient and helpful.

[Edit: January 23]
If you want to read my proposal, you can find it here.