I started doing some experiments with video and time delay. When showing the previously mentioned JPEG video experiment to my coworker Roushey, he mentioned that the time delay effect on its own was interesting. Thinking about neat things to do with it, the first idea I had was to offset each column (or row) of pixels by one frame. So in a 640 x 480 video, the left-most column of pixels would be from the current frame, and the right-most column would be from 639 frames ago. The experiment is live here, and there’s a video of it on YouTube.
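The core of the effect can be sketched like this (this is my reconstruction, not the actual project code; “video” is assumed to be a webcam-attached Video instance already on the stage):

```actionscript
// Reconstruction of the column-delay idea ("video" is an assumed webcam
// Video instance; none of this is the actual project code).
var frames:Vector.<BitmapData> = new Vector.<BitmapData>();
var output:BitmapData = new BitmapData(640, 480, false, 0x000000);
addChild(new Bitmap(output));
addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(e:Event):void {
    // snapshot the current frame onto the front of the history buffer
    var snap:BitmapData = new BitmapData(640, 480, false, 0x000000);
    snap.draw(video);
    frames.unshift(snap);
    if (frames.length > 640) frames.pop().dispose();

    // column x is copied from the frame x steps back in time
    var column:Rectangle = new Rectangle(0, 0, 1, 480);
    for (var x:int = 0; x < 640; x++) {
        column.x = x;
        output.copyPixels(frames[Math.min(x, frames.length - 1)],
                          column, new Point(x, 0));
    }
}
```

Note that a full 640-frame history at this size is very heavy on memory, so in practice you would likely store scaled-down frames and map columns proportionally.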
A few days ago I was wondering if there was a way to reproduce the effect of JPEG compression in real time in Flash. I figured the best way would be to literally encode the image with the AS3 JPEGEncoder class. The problem there was that the returned ByteArray could no longer be interpreted as an image. However, the Loader class does allow loading a ByteArray and converting it to a Bitmap. And luckily, the loadBytes method lets us load the ByteArray right from Flash (as opposed to having to save out a file and load it externally).
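The round trip boils down to a few lines. This is a sketch only — I’m assuming the com.adobe.images.JPEGEncoder class from Adobe’s as3corelib, and “video” stands in for whatever you’re capturing:

```actionscript
// Round trip: bitmap -> JPEG bytes -> bitmap, roughly as described.
var source:BitmapData = new BitmapData(640, 480, false, 0x000000);
source.draw(video);

var encoder:JPEGEncoder = new JPEGEncoder(10); // low quality = heavy artifacts
var bytes:ByteArray = encoder.encode(source);

// Loader.loadBytes hands the ByteArray back to us as displayable content
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener(Event.COMPLETE, onDecoded);
loader.loadBytes(bytes);

function onDecoded(e:Event):void {
    addChild(Bitmap(LoaderInfo(e.target).content));
}
```

Because decoding is asynchronous, repeating this every frame means each displayed frame arrives one COMPLETE event behind the capture.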
Every once in a while, I find that I need to take a quick snapshot with my webcam at work. Since I’m on a PC, I don’t have Apple’s pre-installed Photo Booth app, or any other simple program (that I’m aware of, at least). I also didn’t want to install any bulky third-party software that might come with the webcam I’m using, since all it needs in order to work is the driver, which Windows more or less installed by itself. Photoshop has an option to capture from a webcam, but the utility is pretty limited, as it only lets me capture a very small thumbnail-sized image.
Damn, it’s been a long time since I’ve had a site launch. But I’m happy to announce that my latest project at Firstborn has gone live: SoBe.com. I was the lead Flash developer for this project. The current SoBe campaign also features a Facebook app with a contest and a mobile version of the site (which were not developed by me, but by Phil and Miller, respectively).
There are two sides to the site. The product ‘flavors’ section features videos of real people stopped and interviewed on the street about what they think of SoBe lifewater (which we, Firstborn, filmed). We used streaming video from FMS, provided by Akamai, in order to deliver responsive, seekable video. This was actually a challenge, as streamed video has some slight programmatic behavior differences from progressive download. For instance, a streamed video will not dispatch a NetStatusEvent when it reaches the end of the stream. Another issue I ran into was that, due to security restrictions, the BitmapData.draw function is prohibited unless specifically enabled on the FMS side. The other side of the site, ‘world’, was pretty straightforward, except that since we will be doing updates for the rest of the year, the site needed to be extremely dynamic. This section of the site also features actress Ashley Greene in the nude, painted in two skinsuits to promote SoBe lifewater’s two new flavors.
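One workaround for the missing end-of-stream event: FMS streams do report completion, just through the client object’s onPlayStatus callback rather than a NetStatusEvent. A sketch (“connection” is an assumed, already-connected NetConnection, and the stream name is made up):

```actionscript
// End-of-stream detection for a streamed (FMS) NetStream.
var ns:NetStream = new NetStream(connection);
ns.client = {
    onMetaData: function(info:Object):void { /* duration, dimensions, ... */ },
    onPlayStatus: function(info:Object):void {
        if (info.code == "NetStream.Play.Complete") {
            // treat this like the end-of-video event you'd get with
            // progressive download
        }
    }
};
ns.play("mp4:sampleStream"); // hypothetical stream name
```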
Recently I did some playing around with video and thresholds, mostly just for fun. I first experimented with simply thresholding the video, essentially converting it to black and white. Then I tried comparing the current frame’s threshold with the previous frame’s to get the difference. The result looks like an outline/edge effect, and it’s an interesting way to visualize movement. Then, just for kicks, I decided to display both the previous comparison and the current one, colorizing the current difference as red and the previous as cyan. The result is a very pseudo-3D outline effect. This fake 3D technique can also be applied to straight-up video, showing the previous frame as cyan and the current as red and screening the two. Since this is all based on movement and the idea that objects closer to the camera appear to move greater distances due to perspective, it’s very easy for it to display incorrectly.
Check out the first demo. You’ll need a webcam. Click on the video to change the mode; the two scroll bars control the threshold level and the amount of blur applied to the video before any image processing.
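The frame-difference mode can be sketched roughly like this (“video” is an assumed webcam Video instance, and the threshold level is hard-coded here instead of being driven by a scroll bar):

```actionscript
// Difference of consecutive frames, crushed to black and white.
var current:BitmapData  = new BitmapData(640, 480, false, 0x000000);
var previous:BitmapData = new BitmapData(640, 480, false, 0x000000);
var output:BitmapData   = new BitmapData(640, 480, false, 0x000000);
addChild(new Bitmap(output));
addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(e:Event):void {
    previous.copyPixels(current, current.rect, new Point());
    current.draw(video);

    // difference of the two frames: pixels that didn't move go to black
    output.copyPixels(current, current.rect, new Point());
    output.draw(previous, null, null, BlendMode.DIFFERENCE);

    // crush anything darker than the cutoff to black, testing the red
    // channel only - a rough proxy that works best once the image has
    // been blurred/desaturated a bit first
    output.threshold(output, output.rect, new Point(), "<",
                     0x00300000, 0xFF000000, 0x00FF0000, false);
}
```

Tinting one difference image red and the previous one cyan, then screening them together, gives the pseudo-3D variation.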
Recently I’ve been working on a front-end profanity filter. It makes more sense to have the filter on the server side and return an accepted or rejected response, but since my strength is AS3, I’ve decided to do it this way. There are two parts I want to explain: the first is the reason for a profanity filter, and the second is the actual code. You can check out the demo here.
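At its simplest, such a filter is just a word list folded into a regular expression. A minimal sketch (the word list and names here are placeholders, not what the real filter uses):

```actionscript
// Minimal front-end profanity check (placeholder word list; a real
// filter would also need to handle plurals, leet-speak, etc.).
var badWords:Array = ["darn", "heck"];
var pattern:RegExp = new RegExp("\\b(" + badWords.join("|") + ")\\b", "i");

function isClean(input:String):Boolean {
    return !pattern.test(input);
}
```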
Did a quick test this weekend. I wanted to see whether, if you have a Player 9 swf and load it into a Player 10 swf, there could be any issues. Mainly, I wanted to see if there could be problems with the new 3D properties in Player 10. So I created a Player 9 swf with some Five3D elements in it; essentially, I had an extension of the Sprite class with properties such as z, rotationX, rotationY, and rotationZ. Since Player 10’s Sprite class contains these properties natively, my theory was that importing the Player 9 swf would throw errors, because those 3D properties of the Five3D elements would no longer be properly overridden. This is exactly what happened. Good to keep in mind in case you ever run into a situation where you have to load an older Flash 9 swf into a newer Flash 10 shell.
I don’t know how this escaped me for so long, but the other day I realized that Flash doesn’t understand HTML character entity reference codes. Codes other than &amp; and a few others won’t render out to their appropriate symbols. For example, if I need to display a special character in an HTML page, I would normally use the &+code+; form. So “copyright &copy;2009 eric decker” would render as “copyright © 2009 eric decker”. It would make sense that if I brought this copy into Flash and set it on a textfield with the htmlText setter, I would get the same result. However, I don’t: &copy; will read exactly as “&copy;”. Flash does, however, understand numeric reference codes, which look like &#169;. Personally, I find the name-based entity codes easier to use, and they seem to be more widely used (although I could be wrong about that; I’m not an HTML guy).
Anyway, determined to be able to use &copy; instead of the more ambiguous &#169;, I wrote a class that is able to understand the entity codes. The reasons for this were: 1. as I just mentioned, the name-based codes are easier to read; 2. sometimes you’re not in charge of your source copy, and it might contain entity-based symbols instead of numeric; and of course 3. developers are a little neurotic like that and sometimes ‘do it for the sake of doing it.’ (That could be a T-shirt.) I was able to write a pretty simple converter using a regular expression and a lookup dictionary. You can download the class or view it as text.
The class contains one accessible static function that you pass your raw text to, and it returns the newly formatted text: myTextField.htmlText = HTMLTextUtils.formatHTMLTokens(myString); Better yet, if you have a custom TextField class that you’ve extended, you can override the htmlText setter to have it always use this function.
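A trimmed-down sketch of the approach (the lookup table here is just a small sample I made up; the actual class covers the full entity list):

```actionscript
public class HTMLTextUtils {
    // sample entries only - the real table maps every named entity
    private static const ENTITIES:Object = {
        copy: 169, reg: 174, trade: 8482, nbsp: 160, mdash: 8212, hellip: 8230
    };

    public static function formatHTMLTokens(raw:String):String {
        // find every &name; token and swap in its numeric equivalent;
        // unknown names pass through untouched
        return raw.replace(/&([a-zA-Z]+);/g, function(...args):String {
            var name:String = args[1];
            return (name in ENTITIES) ? "&#" + ENTITIES[name] + ";" : args[0];
        });
    }
}
```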
Grant Skinner’s RegExr is a huge help when doing anything with RegEx.
So over the last week or so, I restructured my 3D engine. Nothing that would be visibly noticeable, but I feel better about where it is at right now. What I like about it, though, is that it is still not what would be probably considered “the right way” to do it. The reason I am happy about this is that it makes it my own, and I understand exactly why and how it works. Once I have a really solid understanding, then I can move towards doing it the “correct way,” but for now, my little hacked together experiment makes me happy.
One of the changes I did make is to the shaders. I added a PseudoGouraudShader, which uses gradients to draw each polygon face, to try to smooth out the hard lines created by flat shading. I doubt this method will ever produce results 100% identical to a real Gouraud shader, but it makes for interesting output. The method I am using is pretty slow, as it needs to draw each face four times, three of those being gradient fills. Each vertex’s normal is calculated from the sum of the normals of all the faces it is a part of, and that brightness value is used to draw a radial gradient on the face, with its center point at the vertex. (Two examples: an RGB example and a dot example; the circular areas are ‘painted’ on via gradient fill.) This can produce some fun effects, too, when played with.
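One of those per-vertex gradient passes might look roughly like this — illustrative only, with made-up names and placeholder values, not the engine’s actual code:

```actionscript
// One gradient pass: a radial gradient centered on the vertex, fading
// its shaded brightness out to transparent across the face.
function paintVertexGradient(g:Graphics, v0:Point, v1:Point, v2:Point,
                             vertex:Point, brightness:uint,
                             radius:Number):void {
    var m:Matrix = new Matrix();
    m.createGradientBox(radius * 2, radius * 2, 0,
                        vertex.x - radius, vertex.y - radius);
    g.beginGradientFill(GradientType.RADIAL,
                        [brightness, brightness], // solid color...
                        [1, 0],                   // ...fading alpha 1 -> 0
                        [0, 255], m);
    g.moveTo(v0.x, v0.y);
    g.lineTo(v1.x, v1.y);
    g.lineTo(v2.x, v2.y);
    g.lineTo(v0.x, v0.y);
    g.endFill();
}
```

Repeating this for each of a triangle’s three vertices, on top of a flat base fill, gives the four draws per face mentioned above.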
I had an example of the PseudoGouraudShader last week, but it was off: alternating faces appeared to have different brightness values. Just playing around last night, I found the issue; it lies in how I was calculating my normals. When taking the cross product of two vectors, it is better to normalize the result rather than normalizing the vectors beforehand. The example below shows the two results and code snippets:
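The takeaway is just the ordering. A sketch using Flash Player 10’s flash.geom.Vector3D for illustration (the engine presumably has its own vector type):

```actionscript
// Cross the raw edge vectors first, then normalize the result once.
function faceNormal(a:Vector3D, b:Vector3D, c:Vector3D):Vector3D {
    var edge1:Vector3D = b.subtract(a);
    var edge2:Vector3D = c.subtract(a);
    var normal:Vector3D = edge1.crossProduct(edge2);
    // Normalizing edge1/edge2 beforehand still leaves the cross product
    // with length sin(angle between them), so brightness would vary with
    // face shape unless you normalize here anyway.
    normal.normalize();
    return normal;
}
```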
Still not perfect, but I’m not sure how close I can get using this method. I think there might still be some slight errors in the core math, which could be producing the results that look similar to the image on the left (although less intense). You can check out the live demo here.
One last note: calculating normals and centroids over and over again for rendering can get pretty intense. In a case like this, the same normal gets calculated multiple times per render even though it hasn’t changed (because each vertex needs the normal of each face it is connected to). In order to boost speed, I created a caching system. When a normal is calculated, it is saved as a “stale normal,” along with an object (which I call the cached face) that tracks the position of each vertex in the face. The next time the normal is requested, if that cached face is no different from the real one, the stale normal is used instead of re-calculating.
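A sketch of that caching idea — the names are mine, not the engine’s, and snapshotVertices() / calculateNormal() are assumed helpers that read the face’s current vertex positions and do the heavy math:

```actionscript
private var staleNormal:Vector3D;
private var cachedFace:Vector.<Number>; // last known x,y,z of each vertex

public function get normal():Vector3D {
    var live:Vector.<Number> = snapshotVertices();
    if (staleNormal == null || !sameFace(live, cachedFace)) {
        staleNormal = calculateNormal(); // the expensive part
        cachedFace = live;
    }
    return staleNormal;
}

private function sameFace(a:Vector.<Number>, b:Vector.<Number>):Boolean {
    for (var i:int = 0; i < a.length; i++) {
        if (a[i] != b[i]) return false;
    }
    return true;
}
```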
Another experiment I did recently was compressing three grayscale images into one file. The reason for this would, of course, be file size and load time. A real-world scenario where you might need to load three same-sized grayscale images is loading textures for a 3D app. For instance, your model might be colored dynamically, but you need to load a diffuse map, a specular map, and a bump map – all of these would have the same dimensions and would be grayscale.
So all you need to do is take your three grayscale images into Photoshop and layer them on top of one another. Then use the Gradient Map image adjustment to make each image range from red, green, or blue to black. Have the base layer be flat black, and set each of the three layers to Screen. Then, once that image is loaded into Flash, you can extract each color channel and convert it back to black and white with the ColorMatrixFilter. You just need to set the matrix so that each of the R, G, B rows is [0,0,0,0,255] and the last row (the alpha) is [1,0,0,0,0] for red, [0,1,0,0,0] for green, or [0,0,1,0,0] for blue. Then just smoosh that transparent white image onto a black bitmap, and you’re set.
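The extraction step can be sketched as a single function (the function name is my own; the matrix is exactly the one described above — RGB forced to white, the chosen channel routed into alpha, and the result flattened onto black):

```actionscript
function extractChannel(packed:BitmapData, alphaRow:Array):BitmapData {
    // alphaRow: [1,0,0,0,0] for red, [0,1,0,0,0] green, [0,0,1,0,0] blue
    var matrix:Array = [0, 0, 0, 0, 255,   // R -> white
                        0, 0, 0, 0, 255,   // G -> white
                        0, 0, 0, 0, 255]   // B -> white
                       .concat(alphaRow);  // A <- chosen channel
    var temp:BitmapData = new BitmapData(packed.width, packed.height,
                                         true, 0x00000000);
    temp.applyFilter(packed, packed.rect, new Point(),
                     new ColorMatrixFilter(matrix));
    // smoosh the transparent white image onto a black bitmap
    var result:BitmapData = new BitmapData(packed.width, packed.height,
                                           false, 0xFF000000);
    result.draw(temp);
    return result;
}
```

So the bump map, say, would come back out with extractChannel(loadedBitmapData, [0,0,1,0,0]) if it was packed into the blue channel.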
Again, this isn’t useful for everyday cases, and it does cut down on how easy and dynamic the loaded content is, but if you’re hurting for file size and load time, it could be something to keep in mind. In case you’re wondering – the layered color image I used in my example is 188 KB, while the blue channel saved as its own grayscale file is 132 KB.