So I just recently finished up a microsite for Sport Chalet, a California/West Coast-based sporting goods boutique. Sport Chalet came to Firstborn wanting to promote their winter line of snowboarding and skiing equipment and apparel, and they brought in a group of professional snowboarders and skiers to help promote the site by modeling the equipment and giving interviews. Our goal was to figure out a way to display all these professionals, as well as the models, in an interesting way. The concept we arrived at was to create a 3D mountain and populate its face with hot spots, each representing one of the sixty or so people. After selecting a person, the user is taken to a ‘rider page’ with motion-tracked video and target points describing what that person is wearing.
The first challenge was creating the mountain in 3D in Flash. Originally we were going to build the mountain from a grayscale depth image, where the height of the 3D mesh would correlate to the brightness value. However, this didn’t offer us enough control over the final result. Instead, we decided to model the mountain in an external editor and bring it into Papervision. Our first inclination was to use the COLLADA or MD2 format, but I was unsure how much access I would have to the mesh, which we needed to make interactive. In fact, since each triangle would be its own object, it couldn’t be part of one mesh; rather, the mountain would be a collection of independent objects. Additionally, the art director was using Cinema 4D, which at the time did not support COLLADA export. So it seemed the only option left was to write my own importer for Cinema 4D. The importer I created uses VRML files (.wrl), which are ASCII text files containing an array of points and faces, as well as UV coordinates for texturing. It’s actually a simple process to rebuild the mesh, aside from a few tricks such as inverting the Z-axis (Papervision’s is opposite to Cinema’s) and reversing the normals, since they all come in backward. Since I wrote my own importer, I knew exactly what was happening, and thus had total control over how the polygons are created and handled, making the interaction process that much easier. Here are a few steps/examples of creating the mountain:
Example 001: Bringing the object in directly from Cinema (the color is from its original material) and showing the wireframe as well.
Example 002: Adding a bitmap texture to the object while having it retain proper UV coordinates.
Example 003: Adding a color gradient based on each triangle’s centroid position relative to the entire mesh, as well as cropping polygons on the lower portion to remove the rectangular base.
Example 004: Adding some effects, such as flat shading (which had a bug I needed to fix first) and a subtle glow.
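As a rough illustration of the Example 003 gradient, here’s how centroid-based coloring might look, sketched in JavaScript rather than the original ActionScript; the function names are my own:

```javascript
// Average the three vertices of a triangle to get its centroid.
function centroid(tri) {
  // tri is an array of three {x, y, z} vertices
  return {
    x: (tri[0].x + tri[1].x + tri[2].x) / 3,
    y: (tri[0].y + tri[1].y + tri[2].y) / 3,
    z: (tri[0].z + tri[1].z + tri[2].z) / 3,
  };
}

// Linearly interpolate between two 24-bit colors, t in [0, 1].
function lerpColor(a, b, t) {
  const ch = (shift) => {
    const ca = (a >> shift) & 0xff;
    const cb = (b >> shift) & 0xff;
    return Math.round(ca + (cb - ca) * t) << shift;
  };
  return ch(16) | ch(8) | ch(0);
}

// Assign each triangle a color based on its centroid's height
// relative to the whole mesh (lowest = bottomColor, highest = topColor).
function gradientColors(triangles, bottomColor, topColor) {
  const ys = triangles.map((tri) => centroid(tri).y);
  const minY = Math.min(...ys);
  const maxY = Math.max(...ys);
  const range = maxY - minY || 1; // avoid divide-by-zero on flat meshes
  return ys.map((y) => lerpColor(bottomColor, topColor, (y - minY) / range));
}
```

In the real importer each color would be assigned to the triangle’s material before rendering; the same centroid values can also drive the cropping step, dropping any face whose centroid falls below a cutoff height.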
Another interesting process was creating the tracked videos. The site contains 78 of them, and each one needed keying, color correction, and finally motion tracking. After each video was tracked with one or several trackers, we simply copied the keyframes right out of After Effects and ran them through a custom parser we built to convert them into XML. You can see the result on the site or in this demo.
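The keyframe-to-XML step could be sketched like this, in JavaScript rather than our original parser. The input format here (lines of tab-separated frame/x/y values) is a simplification of what After Effects actually puts on the clipboard, and the XML element names are made up for illustration:

```javascript
// Convert tab-separated tracker keyframes into a simple XML document.
// Assumed input: one keyframe per line, "frame<TAB>x<TAB>y".
function keyframesToXml(clipboardText) {
  const rows = clipboardText
    .trim()
    .split("\n")
    .map((line) => line.trim().split(/\t+/));
  const points = rows.map(
    ([frame, x, y]) => `  <point frame="${frame}" x="${x}" y="${y}"/>`
  );
  return `<track>\n${points.join("\n")}\n</track>`;
}
```

At runtime the Flash side just loads the XML and positions the target points frame by frame against the video’s playhead.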
So please check the site out; hopefully you’ll enjoy it, as it was pretty fun to make. There’s a contest you can enter as well, but you have to be able to get to one of the Sport Chalet stores, and they’re only located around the Southern California / West Coast area.
I developed a way to export an object from Cinema 4D and bring it into Flash with FIVe3D. Since I’m using FIVe3D, the object comes in as vectors, not bitmaps. Currently I haven’t done anything with textures, but I’d like to try to preserve the color data for each polygon. I figure if you want a bitmap image on a complex polygon model, you might as well just use Papervision3D.
The trickiest part of the export method was simply finding which format works best. It seems that VRML gives you all the data you need, formatted in a way I could understand. The two important exports are an array of 3D points, which looks like [ 0 6.794 0, 13.385 6.794 0, 42.833 6.794 0, 72.281 6.794 0, ] for however many points you have, and an array of face sets, which looks like [ 0,1,42,-1,1,2,43,-1, ]. The face set is instructions on how to connect the points. So in the example above, the first polygon (which is a triangle, so you’d need to triangulate before export) starts at the first point (index 0), then connects to the point at index 1, and finally the point at index 42. The -1 turns out to be VRML’s terminator marking the end of each face’s index list, so I treat it as a face delimiter when I parse.
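Parsing those two arrays is straightforward. Here’s a minimal sketch in JavaScript (not the original ActionScript importer), with -1 treated as the end-of-face marker:

```javascript
// Parse the VRML point array: "0 6.794 0, 13.385 6.794 0, ..."
// into an array of {x, y, z} vertices.
function parsePoints(pointText) {
  return pointText
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s) => {
      const [x, y, z] = s.split(/\s+/).map(Number);
      return { x, y, z };
    });
}

// Parse the VRML face-set array: "0,1,42,-1,1,2,43,-1," into
// [[0, 1, 42], [1, 2, 43]] — each -1 closes out one face.
function parseFaces(indexText) {
  const faces = [];
  let current = [];
  for (const token of indexText.split(",")) {
    const n = parseInt(token, 10);
    if (Number.isNaN(n)) continue; // skip empty trailing tokens
    if (n === -1) {
      if (current.length) faces.push(current);
      current = [];
    } else {
      current.push(n);
    }
  }
  return faces;
}
```

From there, building the mesh is just looking up each face’s three indices in the point array (remembering to flip the Z-axis and the winding order for Flash).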
Anyways, here’s just a simple object I exported and took into Flash: Bottle
Adobe held a press conference about CS4 the other week, covering (of course) the newest version of Flash. Some of the exciting new features are the IK tool and native 3D. I’ve been playing around with the beta version for some time now, and I’m sorry to say I’m a little disappointed. However, there are things I do like about it. The first is the interface, which looks more like After Effects. It will take a little time to get used to, but that’s no biggie, and I don’t spend that much time in the Flash IDE anyway. The second nice feature is the new way of applying tweens. You can tween motion, rotation, scale, filters, etc. all independently (and actually control them independently, not just apply easing independently as in previous versions). Even better, you can scale the timeline and the tweens, so if a tween is too short you can stretch it out, and that applies to the keyframes in between as well – very useful. You also get path guides when you apply tweens, like in After Effects, so applying curves is easy.
Now about the IK tool. Yeah, it’s cool – but that’s it. I’ve had to make puppets in Flash before, and it’s a bitch, so it is nice to have this feature, but I’ve only had to make puppets once. It’s not like I’m making IK people every day. And now that it’s available, I have a feeling we’ll see a lot of puppets just for the heck of it. I think its true potential is in abstract uses, such as linking elements together so they snake around. Adobe themselves said it best back at FITC: just because we have the IK tool doesn’t mean you need to make everything into a puppet. Struck recently released a project for Mario Super Sluggers where they used the Puppet Tool in AE to add little animations to the stars, which is a nice touch. This sort of thing is where I hope the IK/Puppet tool will be used: for adding that extra little detail, not just for creating puppets of people. (You can read a little more about the site on the blog of Jonathan Minori, who was the Art Director on the project.) I guess it’s more useful to the animation side of Flash than to development.
The 3D element seems like a nice idea. For simple stuff like parallax or basic transitions, it’s handy, and if you’re doing bitmap transitions it’s probably beneficial. The drawback seems to be that there aren’t many customizable features (I could be missing them), such as the perspective. If you’re doing simple vector shapes in 3D, it’s still better to use FIVe3D, which is the 3D tool I’m most comfortable with. I did a few tests comparing the two, and FIVe3D still runs faster. Of course, as I’ve mentioned in an earlier post, FIVe3D is specialized for vector content, so handling bitmaps isn’t its original intention (although it still works great). Here are the tests, and I just want to make it clear: I’m not here to say “FIVe3D is great and Player 10 can suck it” – just that each has its own strengths and weaknesses, and you use whichever fits the job.
Multiple Objects: Drawing a bunch of squares and simply rotating them on an EnterFrame. FIVe3D keeps them as vectors (the outline always stays crisp), while Flash must be translating them into bitmaps, as you can see the outlines on the squares degrading as each sprite rotates.
Large Object: Drawing one very large object, pushing it back in z-space, and rotating it on an EnterFrame. Since FIVe3D is vector based, it’s not a problem, but Player 10 seems to struggle, as it must be handling the object as one giant bitmap.
I guess what upsets me is that there were existing issues I wish Adobe had worked on instead of creating new features. Did we really need native 3D in Flash? No, because there are countless other engines out there, such as Papervision, Away3D, FIVe3D, Sandy 3D, and others. The IK tool? A nice feature that will be worth its weight in gold when the time comes to make another puppet, but there are other things I could use more. Personally, I’d like to see better audio control, which unfortunately means a pretty big overhaul of the entire player. I’d also like a way to remove all listeners from a given EventDispatcher, and I’m not quite sure why that hasn’t been implemented yet. (For all I know, there might be a very good reason.) It was easy enough to write my own class that manages the adding and removing of event listeners, but the downside there is that you’re using a third party to manage your events, which just seems more cumbersome and not really worth it.
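For what it’s worth, the kind of listener-managing class I mean can be sketched in a few lines. This is JavaScript with made-up names (the real thing would wrap ActionScript’s EventDispatcher): it records every listener it adds so one call can detach them all later.

```javascript
// Wraps any dispatcher with add/removeEventListener and keeps a record
// of every listener added through it, so removeAll() can clean up.
class ManagedDispatcher {
  constructor(target) {
    this.target = target;
    this.listeners = [];
  }

  add(type, handler) {
    this.target.addEventListener(type, handler);
    this.listeners.push({ type, handler });
  }

  removeAll() {
    for (const { type, handler } of this.listeners) {
      this.target.removeEventListener(type, handler);
    }
    this.listeners.length = 0;
  }
}
```

The catch is exactly the one mentioned above: every listener has to be added through the wrapper, or it won’t be tracked.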
In closing, just so it’s not all a rant, I’d like to congratulate Adobe on the upcoming release. I’ve been tinkering around in Flash for five or so years now (since Flash 5 back in 2002/3, when I was introduced to it in high school), and I’ve been excited every time Macromedia/Adobe launched a new version; it’s always exciting to see the product grow and gain new features. Photoshop has some nice little features too: the content-aware scaling seems pretty cool and is fun to play with. The color picker also has a “copy color as HTML” feature, which is nice since it puts the value right on your clipboard. Except it copies it as “color=#FFFFFF”, and as far as I can tell there’s nowhere to edit that so it comes out ready for Flash (i.e. “0xFFFFFF”).
Note: It seems that when writing a post in Google Chrome, line breaks don’t save for some reason. You also lose the instant spell check that Firefox has.
For their 75th anniversary, Esquire printed 100,000 special editions of their October 2008 issue with ‘e-ink.’ Basically, they added a video screen to the cover of the magazine. When I first heard about this, I was immediately intrigued. I searched NYC for a copy, and didn’t find one until a friend told me he saw some in Grand Central. The quality of the screen was better than I expected; however, the term e-ink is a little misleading. From the sound of it, I thought it would be very thin and flexible with a small circuit board. In fact the cover is rather thick, not very bendable, and the circuit board is pretty evident. The screen is also only black and white, with a color transparency overlay. So this isn’t like e-paper or any of those concepts of inexpensive, paper-thin, flexible screens, but it’s a step in that direction. It’ll be interesting to see what develops next, if anything at all.
I want to start posting some of the little experiments I’ve been working on at work. This one tries to optimize depth-of-field rendering in FIVe3D. The way you can have 500 objects in 3D space, all with depth-of-field blur, and still get decent performance is smoke and mirrors: the objects are never actually blurred at runtime. Instead, before the program runs, it takes the Sprite or Bitmap to be displayed, draws it, and saves it as a bitmap. Then it applies a blur, redraws it, saves that as another bitmap, blurs a little more, and so on. At the end, there’s an array of, say, 20 images in different states of blur. Then, based on the perspective of each object, I find its distance from the camera and generate a value for how much to blur it. But instead of blurring it, I just swap in the applicable pre-blurred image. If you want to see some really cool depth-of-field stuff, check out some of the experiments by Mr. Doob.
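The distance-to-blur-level mapping is the interesting bit, and can be sketched like this (JavaScript rather than ActionScript, with the actual blurring stubbed out; all names here are illustrative):

```javascript
// Build the lookup table once, before the render loop starts.
// `renderBlurred` stands in for the BitmapData.draw() + BlurFilter
// pass in the real thing; blur strength grows with the index.
function buildBlurTable(source, levels, renderBlurred) {
  const table = [];
  for (let i = 0; i < levels; i++) {
    table.push(renderBlurred(source, i));
  }
  return table;
}

// Map a camera distance onto one of `levels` pre-blurred images:
// anything nearer than `focusNear` stays sharp (index 0), anything
// past `focusFar` gets the strongest blur (index levels - 1).
function blurIndexForDistance(distance, focusNear, focusFar, levels) {
  const t = (distance - focusNear) / (focusFar - focusNear);
  const clamped = Math.min(1, Math.max(0, t));
  return Math.round(clamped * (levels - 1));
}
```

Per frame, each object just does one table lookup and a bitmap swap, which is why 500 objects stay cheap: the expensive blur work all happened before the first frame rendered.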
click and drag to rotate, mouse wheel zooms in/out
So I just finished my first project at Firstborn Multimedia – XM Wild Ride, which is more of a game than a traditional site. The user drives along in their new XM-radio-enabled car through four cities based on the many different genres XM satellite radio offers. During the open-road areas between cities, you can take exits to two minigames and a theater section, which unfortunately has only one video as of now. The site uses the same engine as the Zune Journey that Firstborn did last year.
My role on the project was development. I programmed the ‘Help the Hippo’ game, which is basically Frogger with a hippo. I also did the main drive experience, which involved taking the preexisting Zune scroller engine Mathieu created and repurposing and almost completely overhauling it. This was the first thing I was assigned, pretty much from day one. I started with the game, but after finishing the alpha fairly quickly I was switched to the drive. All the environment illustrations were provided to us, but I arranged them into the environments. Mathieu Badimon was the lead developer and oversaw the entire development, as well as programming the site shell and structure. Izaias Cavalcanti did the ‘Flip the Animals’ game, and Max Holdaway did the theater section. Joon Park modeled the characters, and fellow RIT New Media Designers Mike Kuzmich and Eric Eng did the animations. Will Russell and Wes Adams were the producers for the project. We also had help with the animations and touch-up work from some of our interns.
By the way, we hid a bunch of Easter eggs: invisible targets on select buildings. Here’s a freebie: if you click the mouth of the guy at the first bus stop, you’ll get the mini-map I used for debugging, showing the positions of the other cars. (Also, while developing the project I discovered the site syncs well with the new album from The Dandy Warhols, specifically Love Song and Mis Amigos.)
Here’s two more experiments I recently did with FIVe3D. This first one here is taking the image depth analyzer and is extending it to generate a 3D mesh instead. (I added the option to turn off the stereo rendering help performance.) The next test was something I did for a potential upcoming project. I’ve been meaning to get back to my “Flash Roots” (no pun intended on the ‘root’) and visit my first Flash project ever – a game called Super Josh. One way or the other I am going to start working on a new version, which would technically be the fifth game even though most of them never got very far. Anyways, here’s the Super Josh FIVe test. Basically, I can use FIVe3D to render the environment and use pre-rendered sequences from Cinema 4D for more complex geometry such as characters. The idea for this is based upon many games that have done this before, the one that stands out the most to me is Mario Kart 64.
Speaking of 3D, here’s another test I did back in college using the BitmapTransform class with tracker points exported as XML from After Effects. The BitmapTransform class simply deforms an image based on four target points.