3D Engine Update 3: Shading improvements

So over the last week or so, I restructured my 3D engine. Nothing that would be visibly noticeable, but I feel better about where it is at right now. What I like about it, though, is that it is still not what would probably be considered “the right way” to do it. The reason I am happy about this is that it makes it my own, and I understand exactly why and how it works. Once I have a really solid understanding, then I can move towards doing it the “correct way,” but for now, my little hacked-together experiment makes me happy.

One of the changes I did make is to the shaders. I added a PseudoGouraudShader, which uses gradients to draw each polygon face, to try and smooth out the hard lines created by a flat shader. I doubt this method will ever produce results 100% identical to a true Gouraud shader, but it makes for interesting results. The method I am using is pretty slow, as it needs to draw each face four times, three of those being gradient fills. Each vertex’s normal is calculated from the sum of the normals of all the faces it is a part of, and the brightness value derived from that normal is used to draw a radial gradient on the face, with the vertex as the center point. (Two examples: RGB example and dot example; the circular areas are ‘painted’ on via gradient fill.) This can produce some fun effects too when played with.
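In rough terms, the vertex-normal step looks something like this (a simplified sketch with stand-in names like face.vertices and face.normal, not my actual engine classes):

```actionscript
// Sketch only: faces are assumed to carry a precomputed unit normal and a
// vertices array. A vertex normal is the normalized sum of the normals of
// every face that vertex belongs to.
function vertexNormal(vertex:Object, faces:Array):Object {
    var nx:Number = 0, ny:Number = 0, nz:Number = 0;
    for each (var face:Object in faces) {
        if (face.vertices.indexOf(vertex) == -1) continue; // not part of this face
        nx += face.normal.x;
        ny += face.normal.y;
        nz += face.normal.z;
    }
    var len:Number = Math.sqrt(nx * nx + ny * ny + nz * nz);
    return {x: nx / len, y: ny / len, z: nz / len};
}
```

The brightness at each vertex then comes from that vertex normal and the light direction, and drives the center color of the radial gradient.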

I had an example of the PseudoGouraudShader last week, but it was off: it looked like alternating faces had different brightness values. Just playing around last night, I found the issue. It lies in how I was calculating my normals: when taking the cross product of the edge vectors, it is better to normalize the result instead of normalizing the vectors beforehand, since the cross product of two unit vectors is only unit length when the vectors are perpendicular. The example below shows the two results and code snippets:

normalized-result
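For those who’d rather not squint at the image, the fix boils down to this (my reconstruction in simplified form, not the exact snippet pictured):

```actionscript
// Compute a face normal from two edge vectors. The old version normalized
// the edge vectors first and skipped normalizing the cross product; since
// the cross product of two unit vectors is only unit length when they are
// perpendicular, brightness varied from face to face.
function faceNormal(a:Object, b:Object, c:Object):Object {
    // raw (un-normalized) edge vectors
    var ux:Number = b.x - a.x, uy:Number = b.y - a.y, uz:Number = b.z - a.z;
    var vx:Number = c.x - a.x, vy:Number = c.y - a.y, vz:Number = c.z - a.z;
    // cross product u x v
    var nx:Number = uy * vz - uz * vy;
    var ny:Number = uz * vx - ux * vz;
    var nz:Number = ux * vy - uy * vx;
    // normalize the RESULT
    var len:Number = Math.sqrt(nx * nx + ny * ny + nz * nz);
    return {x: nx / len, y: ny / len, z: nz / len};
}
```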

Still not perfect, but I’m not sure how close I can get using this method. I think there might still be some slight errors in the core math, which could be producing results similar to the image on the left (although less intense). You can check out the live demo here.

On a last note, calculating normals and centroids over and over for rendering can get pretty expensive. In a case like this, the same normal gets calculated multiple times per render even though it hasn’t changed (because each vertex needs the normal of each face it is connected to). In order to boost speed, I created a caching system. When a normal is calculated, it is saved as a “stale normal,” and an object (which I call the cached face) is updated to track the position of each vertex in the face. The next time the normal is requested, if the cached face is no different than the real one, the stale normal is used instead of recalculating.
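In sketch form (again with stand-in names, and reusing the faceNormal() function from the snippet above), the cache check looks something like this:

```actionscript
// Return a face's normal, recomputing only if a vertex has actually moved
// since the last calculation. 'staleNormal' and 'cachedFace' are the two
// pieces of cache state described above.
function getNormal(face:Object):Object {
    var dirty:Boolean = (face.cachedFace == null);
    if (!dirty) {
        for (var i:int = 0; i < face.vertices.length; i++) {
            var v:Object = face.vertices[i];
            var c:Object = face.cachedFace[i];
            if (v.x != c.x || v.y != c.y || v.z != c.z) {
                dirty = true; // a vertex moved; the cache is stale
                break;
            }
        }
    }
    if (dirty) {
        face.staleNormal = faceNormal(face.vertices[0], face.vertices[1], face.vertices[2]);
        face.cachedFace = [];
        for each (var p:Object in face.vertices) {
            face.cachedFace.push({x: p.x, y: p.y, z: p.z}); // snapshot positions
        }
    }
    return face.staleNormal;
}
```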

Using RGB Channels to Compress Multiple Grayscale Images

Another experiment I did recently was compressing three grayscale images into one file. The reason for this is, of course, file size and load time. A real-world scenario where you might need to load three grayscale images of the same size is loading textures for a 3D app. For instance, your model might be colored dynamically, but you need to load a diffuse map, a specular map, and a bump map – all of these would be the same dimensions and would be black and white.

So all you need to do is take your three grayscale images into Photoshop and layer them on top of one another. Then use the gradient map image adjustment to have each image range from red, green, or blue to black. Have the base layer be flat black, and set each of the three layers to screen. Then, once that image is loaded into Flash, you can extract each color channel and set it back to black and white with the ColorMatrixFilter. You just need to set the matrix so that each of the R, G, B rows is [0,0,0,0,255] and the last row (the alpha) is [1,0,0,0,0] for red, [0,1,0,0,0] for green, or [0,0,1,0,0] for blue. Then just smoosh that transparent white image onto a black bitmap, and you’re set.
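The Flash side of the extraction looks roughly like this (a sketch; extractChannel is just my name for it here, with 0/1/2 selecting red/green/blue):

```actionscript
import flash.display.BitmapData;
import flash.filters.ColorMatrixFilter;
import flash.geom.Point;

// Pull one channel out of the combined image and rebuild it as grayscale.
// The R/G/B rows force the color to pure white; the alpha row copies the
// chosen source channel into alpha.
function extractChannel(source:BitmapData, channel:int):BitmapData {
    var matrix:Array = [
        0, 0, 0, 0, 255,  // R = 255
        0, 0, 0, 0, 255,  // G = 255
        0, 0, 0, 0, 255,  // B = 255
        0, 0, 0, 0, 0     // A row, filled in below
    ];
    matrix[15 + channel] = 1; // alpha = source red (0), green (1), or blue (2)
    var white:BitmapData = new BitmapData(source.width, source.height, true, 0);
    white.applyFilter(source, source.rect, new Point(), new ColorMatrixFilter(matrix));
    // "smoosh" the transparent white image onto flat black
    var result:BitmapData = new BitmapData(source.width, source.height, false, 0xFF000000);
    result.draw(white);
    return result;
}
```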

Again, not entirely useful for everyday cases, and it does cut down on how easy and dynamic the loaded content is, but if you’re hurting on file size and load time, this could be something to keep in mind. In case you’re wondering: the layered color image I used in my example is 188 KB, while the blue channel saved as its own grayscale file is 132 KB.

Text to Image Transcoding

I’ve been playing around in Flash, making an ‘image encoder’ that converts text into an image, or a String into BitmapData. Basically, I wanted to convert each character into binary and then represent it as an image. So the first thing I did was assign each character a number, and then convert that number into binary. Since I wanted to control which characters could be encoded, I created an array of the characters I wanted (instead of using the charCode) and used the index of each character as its id. Each character is converted into a 7-digit binary value (the digit count is based on the array length), and then each digit is written to bitmap data, 0 as black and 1 as white.
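Here’s a stripped-down sketch of the encoder (names and layout are simplified; ‘charset’ is whatever array of allowed characters you build):

```actionscript
import flash.display.BitmapData;

// Encode text as black-and-white pixels: each character's index in the
// charset becomes a 7-digit binary id, one pixel per digit.
function encodeText(text:String, charset:Array, width:int):BitmapData {
    var bits:Array = [];
    for (var i:int = 0; i < text.length; i++) {
        var id:int = charset.indexOf(text.charAt(i));
        if (id == -1) continue; // character not in the charset
        // convert the id to binary "manually" with modulo
        var digits:Array = [];
        for (var d:int = 0; d < 7; d++) {
            digits.unshift(id % 2); // least significant digit first, so unshift
            id = int(id / 2);
        }
        bits = bits.concat(digits);
    }
    var height:int = int(Math.max(1, Math.ceil(bits.length / width)));
    var bmp:BitmapData = new BitmapData(width, height, false, 0xFF000000);
    for (var p:int = 0; p < bits.length; p++) {
        if (bits[p] == 1) bmp.setPixel(p % width, int(p / width), 0xFFFFFF);
    }
    return bmp;
}
```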

I tried the same thing, except with a color image. Instead of having 7 pixels represent a character, I used one pixel per character, converting each charCode into a hex value. The result is that the image is all blue, as the commonly used character codes are all under 255. Not a very useful exercise, but it was still sort of fun, as was converting a number into binary “manually.” (It involves a lot of modulo.)
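The color version is even shorter – one pixel per character, charCode as the pixel value (again, just a sketch):

```actionscript
import flash.display.BitmapData;

// One pixel per character: the charCode becomes the pixel value directly.
// Common codes are all under 255, so the values land in the blue channel
// and the whole image comes out blue.
function encodeTextColor(text:String, width:int):BitmapData {
    var height:int = int(Math.max(1, Math.ceil(text.length / width)));
    var bmp:BitmapData = new BitmapData(width, height, false, 0xFF000000);
    for (var i:int = 0; i < text.length; i++) {
        bmp.setPixel(i % width, int(i / width), text.charCodeAt(i));
    }
    return bmp;
}
```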

The image below is the first chapter of Lewis Carroll’s Alice in Wonderland encoded into an image. Note: I chose Alice in Wonderland because it is copyright free, not because of the new trailer for the Tim Burton movie. There’s a larger example here with the color image as well.

alice-transcoded

3D Engine Creation pt 2: Lighting & Shaders

3d-engine-banner-part2

Quick update on my current experiment building a 3D engine in Flash: I’ve added light and shaders. Shaders are special materials that are affected by the direction each face is pointing (called the normal) and another vector, in this case a light direction. During the render process, each face calculates its brightness, giving the illusion that light is reflecting off of the mesh.

In my research, I found on a few occasions that the way to calculate lighting is to simply add the normalized normal and the normalized light vector. This, however, seems to be incorrect, or at least not applicable to this situation. For one, the result is another Point3D, whereas I am looking for a brightness value. Upon consulting my coworker Mathieu Badimon, he suggested using the dot product of the normal and light vector instead. This returns a single number, and when used to calculate brightness it seems to work perfectly.
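The whole lighting step reduces to a couple of lines (a sketch with my own names; the exact sign convention depends on which way your normals and light vector point):

```actionscript
// Brightness from the dot product of a unit face normal and a unit light
// vector: 1 when the face points straight at the light, 0 (clamped) when
// it faces away.
function brightness(normal:Object, light:Object):Number {
    var dot:Number = normal.x * light.x + normal.y * light.y + normal.z * light.z;
    return Math.max(0, dot);
}
```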

Next step? Finally tackling bitmap distortion in 3D, allowing for bitmap materials. (Which opens the door to different types of shaders as well, such as bump maps and DOT3 normal mapping.) Once I’ve tackled that, I plan to completely restructure the engine, as I now have a better idea of how it can work while still sticking to my original plan of excluding a view3D. In the process of restructuring, I plan to write up the core math equations in the most simplified way that I can, as finding some of these equations in a non-language-specific or simplified manner can be difficult.

Check out the new demo here. Click and drag the white dot to move the light source.

Creating a 3D Engine in Flash

3d-engine-banner-part1

A few weeks ago I had to create a simple X-axis-based parallax ‘engine’ for a project. Just basic horizontal movement, scaled based on an abstract z value. Well, somehow that turned into a full 3D engine. So I am continuing to build on what I had started in order to create my own Flash 3D engine, not because I think I can build a better Papervision, Away3D, or Five3D (because I’m sure I can’t), but rather in order to learn. Since I end up using 3D engines fairly often now, I figure it’s good experience for me to build my own and see how it can work from the ground up.


Pixel Bender Filters

I started to play around with Pixel Bender a few weeks ago, but didn’t have much time to really accomplish anything interesting. However, I came across the opportunity (or rather, excuse) to use it to whip up a quick filter for my current project. Basically, we have a bunch of transparent PNG images of people and objects. In the images, the figures or objects cast shadows, but the shadows’ transparency is not taken into account (think of it more like a .gif, where transparency is either true or false rather than smooth). So I wanted a way to take a selection of the image and translate its brightness value to alpha. The result would be similar to using a multiply blend mode, if blend modes could be used with only one layer. Anyways, I couldn’t find a way in Photoshop to accomplish the effect to my liking, so I wrote a super simple Pixel Bender filter, imported it, and voilà, I had exactly what I needed. Pixel Bender is going to be a great tool for Player 10, but I think there could be a lot of interesting and practical uses for it even outside of Flash.
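For the Flash side of things, wiring a compiled kernel up in Player 10 looks roughly like this (a sketch; “brightnessToAlpha.pbj” is a placeholder name for the filter described above):

```actionscript
import flash.display.DisplayObject;
import flash.display.Shader;
import flash.events.Event;
import flash.filters.ShaderFilter;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;

// Load a compiled .pbj kernel and apply it to a display object as a filter.
function applyKernel(target:DisplayObject, url:String):void {
    var loader:URLLoader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.BINARY; // .pbj is binary bytecode
    loader.addEventListener(Event.COMPLETE, function(e:Event):void {
        var shader:Shader = new Shader(loader.data); // bytecode -> Shader
        target.filters = [new ShaderFilter(shader)];
    });
    loader.load(new URLRequest(url));
}

// usage: applyKernel(personPng, "brightnessToAlpha.pbj");
```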

Here’s the filter (right/option-click and save as; Firefox and the like seem to recognize it as a text file) if you’d like, as well as an image that hopefully clarifies what I was trying to describe above.

bender_example.jpg

Exporting Cinema4D Objects into Flash

I developed a way to export an object from Cinema4D and bring it into Flash with FIVe3D. Since I am using FIVe3D, the object comes in as vectors, not as bitmaps. Currently I haven’t done anything with textures, but I would like to try to preserve color data for each polygon. I figure that if you want a bitmap image on a complex polygon model, you might as well just use PaperVision3D.

The export method was the trickiest part, or rather, just finding which format works best. It seems that VRML gives you all the data you need, formatted in a way I could understand. The two important exports are an array of 3D points, which looks like [ 0 6.794 0, 13.385 6.794 0, 42.833 6.794 0, 72.281 6.794 0, ] for however many points you have, and an array of face sets, which looks like [ 0,1,42,-1,1,2,43,-1, ]. The face set is instructions on how to connect the points. So in the previous example, the first polygon (which is a triangle, so you’d need to triangulate before export) starts at the first point (index 0), then connects to the second point (index 1), and finally the forty-third point (index 42). The -1 marks the end of each face, so it works as a delimiter when parsing.
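Parsing the face set then becomes a matter of splitting on the -1s (a sketch, assuming the index list has already been pulled out of the VRML file as a string):

```actionscript
// Split a VRML coordIndex string like "0,1,42,-1,1,2,43,-1," into triples.
// -1 closes each face, so it acts as the delimiter.
function parseFaces(coordIndex:String):Array {
    var faces:Array = [];
    var current:Array = [];
    for each (var token:String in coordIndex.split(",")) {
        token = token.replace(/\s/g, "");
        if (token == "") continue; // trailing comma leaves an empty token
        var index:int = int(token);
        if (index == -1) {
            faces.push(current); // face complete
            current = [];
        } else {
            current.push(index);
        }
    }
    return faces; // e.g. [[0,1,42], [1,2,43]]
}
```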

Anyways, here’s just a simple object I exported and took into Flash: Bottle


Depth of Field test

I want to start posting some of the little experiments I’ve been working on at work. This one tries to optimize depth-of-field rendering in FIVe3D. The way you can have 500 objects in 3D space with depth-of-field blur on all of them and still get decent performance is smoke and mirrors: the objects are never actually being blurred at render time. Instead, before the program runs, it takes the Sprite or Bitmap that is to be displayed, draws it, and saves it as a bitmap. Then it applies a blur, redraws it, saves that as another bitmap and stores it away, then blurs a little more, and so on. So, at the end, there’s an array with, say, 20 or so images in different states of blur. Then, based on the perspective of each object, I find its distance from the camera and generate a value for how much to blur it. But instead of blurring it, I just swap out the current bitmap for the applicable pre-blurred image. If you want to see some really cool depth-of-field stuff, check out some of the experiments by Mr. Doob.
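The pre-blur pass is simple enough to sketch (arbitrary step values; my real code differs in the details):

```actionscript
import flash.display.BitmapData;
import flash.filters.BlurFilter;
import flash.geom.Point;

// Build the lookup table of progressively blurred copies up front. At
// render time, map an object's camera distance to an index and swap in
// cache[index] instead of filtering live.
function buildBlurLevels(source:BitmapData, levels:int, step:Number):Array {
    var cache:Array = [source.clone()]; // level 0: no blur
    var current:BitmapData = source.clone();
    for (var i:int = 1; i < levels; i++) {
        // blur a little more each pass and snapshot the result
        current.applyFilter(current, current.rect, new Point(), new BlurFilter(step, step, 2));
        cache.push(current.clone());
    }
    return cache;
}
```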

dof1.jpg

click and drag to rotate, mouse wheel zooms in/out

More FIVe3D experiments

Here are two more experiments I recently did with FIVe3D. The first one takes the image depth analyzer and extends it to generate a 3D mesh instead. (I added the option to turn off the stereo rendering to help performance.) The next test was something I did for a potential upcoming project. I’ve been meaning to get back to my “Flash roots” (no pun intended on the ‘root’) and revisit my first Flash project ever, a game called Super Josh. One way or the other, I am going to start working on a new version, which would technically be the fifth game, even though most of them never got very far. Anyways, here’s the Super Josh FIVe test. Basically, I can use FIVe3D to render the environment and use pre-rendered sequences from Cinema 4D for more complex geometry such as characters. The idea is based on the many games that have done this before; the one that stands out most to me is Mario Kart 64.

Speaking of 3D, here’s another test I did back in college using the BitmapTransform class with tracker points exported as XML from After Effects. The BitmapTransform class just deforms an image based on four target points.

Anaglyphic 3D Rendering in FIVe3D

So since Mathieu Badimon, creator of FIVe3D, sits right behind me at work and we’ve been working on a project together, I figured I should give his 3D engine a try. In case you aren’t familiar with it, FIVe3D is a light, simple 3D emulator for Flash. However, don’t confuse it with PaperVision. If you want fully modeled polygons with textures, lighting, and shadows – basically full 3D – then PaperVision is your tool. But if you need simple 3D rendering, and you don’t want to import a million classes and compromise performance, that’s where FIVe3D comes into play. Mathieu does a pretty good job of showing off its uses on his LAB site.

So back to what I’ve been up to. I’ve always been interested in doing nifty 3D stuff with Flash, so getting into this was right up my alley. One of the first things I did was extend the engine to render stereographically, as in those old-time 3D glasses with the two different colored lenses. Working from the FIVe3D classes, I’ve extended the Scene3D class, so instead of Scene3D you use AnaglyphScene3D, and it will display the result for use with 3D glasses. So anything done with FIVe3D can be dropped in, and it should work perfectly. It’s set up so the user can change the offset of the two perspectives, as well as show the original rendering. I still have to tweak the way it composites all three together, as right now it blows out the original image’s colors. Keep in mind, too, that in order to get the stereographic effect, you need to render the scene twice; if you’re showing the original image as well, then you’re rendering three times, and that can make performance take quite a hit.
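At its core, the compositing step is a channel swap, something like this (a sketch of the idea, not AnaglyphScene3D’s actual internals):

```actionscript
import flash.display.BitmapData;
import flash.display.BitmapDataChannel;
import flash.geom.Point;

// Render the scene twice with the camera offset horizontally, then take
// red from the left eye and green/blue from the right (red/cyan glasses).
function combineEyes(leftEye:BitmapData, rightEye:BitmapData):BitmapData {
    var out:BitmapData = rightEye.clone(); // green + blue from the right eye
    out.copyChannel(leftEye, leftEye.rect, new Point(),
                    BitmapDataChannel.RED, BitmapDataChannel.RED);
    return out;
}
```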

Anyways, here are two examples of what I’ve done with it. The first test just shows a bunch of cubes in space, but it runs pretty smoothly, even on my old G4 PowerBook. The next test can be pretty heavy, so be prepared for a browser crash if your computer is weak of heart. This depth demo takes a grayscale image and converts it into points with a z-depth based on the brightness value. I’ll do some more with it and will probably post some more tests within the next few weeks.
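The sampling step for the depth demo is roughly this (a sketch; ‘spacing’ and ‘depth’ are just tuning values):

```actionscript
import flash.display.BitmapData;

// Walk the grayscale image on a grid and emit one {x, y, z} point per
// sample, with z driven by brightness.
function imageToPoints(img:BitmapData, spacing:int, depth:Number):Array {
    var points:Array = [];
    for (var py:int = 0; py < img.height; py += spacing) {
        for (var px:int = 0; px < img.width; px += spacing) {
            var gray:int = img.getPixel(px, py) & 0xFF; // any channel works in grayscale
            points.push({x: px, y: py, z: (gray / 255) * depth});
        }
    }
    return points;
}
```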

example1.jpg

example2.jpg