I probably should have mentioned this way earlier, but I’ll be speaking along with Mathieu Badimon at OFFF Paris on behalf of Firstborn. Our presentation will be on Friday, June 25th at 12:30. I have to say, this is quite an honor to speak at this conference. If you’re interested, there should be more information on the OFFF Paris 2010 website.
I started to play around with Pixel Bender a few weeks ago, but didn’t have much time to really accomplish anything interesting. However, I came across the opportunity (or rather excuse) to use it to whip up a quick filter for my current project. Basically, we have a bunch of transparent PNG images of people or objects. In the images, the figures or objects cast shadows, but the shadow’s transparency is not taken into account (think of it more like a GIF, where transparency is either on or off rather than smooth). So I wanted a way to take a selection of the image and translate its brightness value to alpha. The result is similar to using a multiply blend mode, if blend modes could be used with only one layer. Anyway, I couldn’t find a way to accomplish the effect to my liking in Photoshop, so I wrote a super simple Pixel Bender filter, imported it, and voilà, I had exactly what I needed. Pixel Bender is going to be a great tool for Player 10, but I think there could be a lot of interesting and practical uses for it even outside of Flash.
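The per-pixel idea is simple enough to sketch outside of Pixel Bender. Here’s a rough Python version of the same operation (the averaging and the exact output format are my assumptions, not the kernel verbatim), assuming normalized RGBA channels:

```python
def brightness_to_alpha(r, g, b):
    """Map a pixel's brightness to alpha: dark pixels (shadows)
    become opaque, white pixels become fully transparent.
    All channels are floats in [0, 1]."""
    brightness = (r + g + b) / 3.0  # simple average luminance
    alpha = 1.0 - brightness        # darker -> more opaque
    return (r, g, b, alpha)
```

Run over every pixel of the selection, this gives the smooth shadow falloff that a hard on/off alpha can’t.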
Here’s the filter (right/option-click and save as; Firefox and the like recognize it as a text file) if you’d like, as well as an image that hopefully clarifies what I was trying to describe above.
A little while ago I went to the FITC Toronto 08 festival. FITC is an annual ‘design and technology festival.’ It used to stand for Flash in the Can[ada], but since Adobe acquired Macromedia the festival has broadened to include other technologies such as motion graphics, 3D, experimental work (Processing), installations, etc. It’s run on its own, though Adobe is a big sponsor.
There were a lot of great presentations this year. Mario Klingman had a great presentation again, this time focusing on a.viary, where he is working on the Pattern Generator. Eric Natzke showed some of his recent visual Flash work, and Robert Hodgin showed what he’s been doing with Processing lately with music, flocking, and pigeons. Keith Peters gave a talk about fractals and some crazy math, as well as an Air app he made to demonstrate both. Dr. Whoo Hoo talked about some crazy stuff with Illustrator and Flash communicating, such as having a swf run actions in an open .ai file. I have some ideas I want to try with that when I get some time. (Illustrator games anyone?) North Kingdom gave a great presentation of what they’ve been doing and what life is like in Skellefteå in Sweden. I picked up a print by Scott Hanson after his presentation and still need to finish the frame for it. There were other great ones too, as well as some great insight into the future of Flash. (I also can’t forget the 2nd night party that had the Junior Boys, a band that I love.)
Mentioned at the keynote presentation were some cool features coming in Flash 10. One is better timeline control, which will act more like After Effects: when MovieClips are tweened, there are now bezier paths on the stage that allow for easy easing, curving, and changing. There will also be built-in IK for puppeting that will also export for runtime. Another long-awaited feature is Z-depth and simple 3D, so now you can have planes in space. It looks a lot more basic than PaperVision, but the key thing is that it is native to Flash and the IDE.
Guitar Genetics is the project I’ve been working on for the last six weeks or so, combining an electric guitar and Processing. I feed an electric guitar through a headphone amplifier and into the line in on my laptop. (I also split the signal before the headphone amp to a real amp, so you can hear it as you play.) Using the Fast Fourier Transform from the Sonia library, I made a visualization that resembles DNA patterns. The color of each plot depends on which string on the guitar is plucked, and the vertical position is based on the fret. Since I am using pseudo-note detection, the program reads extra notes most of the time, which produce more interesting visuals. To create a print, all the note information from a recording session is saved to an XML file, which is then run through a render engine I built, so I can produce high-resolution prints. I have a more detailed case study below, or you can download it as a PDF.
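For a sense of the string/fret mapping involved, here’s a rough Python sketch. The open-string frequencies are just standard-tuning values, and the brute-force nearest-pitch search is my own assumption, not necessarily how the actual sketch decides:

```python
# Standard-tuning open-string frequencies in Hz (low E to high E)
OPEN_STRINGS = [82.41, 110.00, 146.83, 196.00, 246.94, 329.63]

def guess_string_and_fret(freq, max_fret=12):
    """Return the (string_index, fret) whose equal-tempered pitch is
    closest to the detected frequency. Each fret raises the open
    string by one semitone: f = open * 2**(fret / 12)."""
    best = None
    for s, open_f in enumerate(OPEN_STRINGS):
        for fret in range(max_fret + 1):
            f = open_f * 2 ** (fret / 12)
            err = abs(f - freq)
            if best is None or err < best[0]:
                best = (err, s, fret)
    return best[1], best[2]
```

The string index would then pick the plot color, and the fret the vertical position. Note the ambiguity (the same pitch exists on several strings) is part of why pseudo-note detection reads “extra” notes.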
There are more images on flickr.
I took an elective through the glass program this quarter, and got to spend 10 weeks working in a flame shop and hot shop. The flame shop is where people use torches to make little things, like dolphins and marbles. The hot shop is where people do glass blowing with furnaces. I have to say that the hot shop was pretty cool. It’s amazing: the furnaces that house the melted glass run at something like 2,100º and, once they are turned on, are never turned off. They have to be hooked up to a generator if the power fails. If they do turn off and the glass cools, it contracts and literally makes the whole furnace implode. So you can imagine getting right up to that furnace is pretty intense. Now I’m not the best at glass blowing, but the whole experience was great, so if you ever have the chance to take a class in glass blowing, or are in Corning where you can take one at the glass shops there, do it.
(left to right: lopsided paperweight with ‘party mix’ colored frit, glass grass sculpture made in flame shop, ugly bottle made of golden glass)
Jason Arena has us at it again, making crazy stuff. This time around I am using Processing again (it’s just so much fun) and will be using an electric guitar as an input device. It’s simple to set up and required me only to buy a 6′ male-to-male 1/4″ stereo cable and find a 9-volt battery to repair a headphone amplifier. The electric guitar outputs to the headphone amp, which then outputs to the mic input of my laptop. Then, using the Sonia library, I take that data and do whatever I want. As of now I am working on note detection, which can’t be done through pitch alone, so I have to analyze the frequency spectrum. I’ll have a video up shortly demonstrating it.
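The core of analyzing the spectrum is just finding the strongest bin and converting it back to Hz. A minimal Python sketch of that relationship (Sonia hands you the spectrum in Processing; the 44.1 kHz sample rate here is an example value):

```python
def peak_frequency(spectrum, sample_rate=44100):
    """Given the magnitude spectrum of one FFT frame (length N/2 for
    an N-point FFT), return the frequency of the strongest bin.
    Bin k corresponds to k * sample_rate / N Hz."""
    n_fft = 2 * len(spectrum)
    k = max(range(len(spectrum)), key=lambda i: spectrum[i])
    return k * sample_rate / n_fft
```

A plucked note shows up as a cluster of strong bins (fundamental plus harmonics), which is why real note detection needs more than just the single peak.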
Project 1 is wrapping up for my Virtual Entertainment class. I’ve picked my favorites (it was hard) and have made an image that will be of print quality. Our class will be printing them out in 8-, 5-, and 3-inch squares and then displaying them. In order to make an 8″ x 8″ square, I needed a 2400 x 2400 px image. Here’s an example of the image scaled down, and a section of it at 100%.
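The pixel size follows directly from the print resolution; assuming a 300 dpi target (a common print standard, not something the class specified to me):

```python
def print_pixels(inches, dpi=300):
    """Pixels per side needed to print a square at a given dpi."""
    return inches * dpi

# 8" -> 2400 px, 5" -> 1500 px, 3" -> 900 px
```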
Here are a handful of images “I” have made with my ImageMapper program. I figured I would show the photo I used to generate each image. It kind of takes all the magic away, seeing them side by side. Again, there are more on flickr.
Joshua Tree National Park
I’ve been playing around more with Processing, and I’ve settled on something I like for my project for Virtual Entertainment. I am going to be using my ‘Image Mapper’ program that I was working with, which maps out all the same colors in an image. I can put any image I want into it, but right now it is set up for 900 x 900. I can let it render by itself, or I can control the sample point with the mouse. I have it write out a PDF of its progress every few minutes, so it basically renders in steps. I can then bring the steps together in Illustrator (saving the whole thing out as one PDF makes the app crash).
The way I’ve been making images until now has been with a modified batch version: I give it an array of image files and how long to draw each one, and it saves out its progress as a PNG file. This lets me see how a handful of images will look by letting it run overnight or while I am away.
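The heart of the mapping step, stripped of all the drawing, is just “find every pixel that matches the sample point.” A minimal Python sketch, assuming the image is a list of rows of packed color values (my representation, not the sketch’s):

```python
def same_color_pixels(pixels, x, y):
    """Return the coordinates of every pixel whose color exactly
    matches the sampled pixel at (x, y). pixels is a list of rows."""
    target = pixels[y][x]
    return [(px, py)
            for py, row in enumerate(pixels)
            for px, c in enumerate(row)
            if c == target]
```

The drawing pass then connects the sample point to each match, which is where the dense line work in the renders comes from.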
Here is a demo of how it works. You can click and drag to control the sample point, and the up and down arrows control the range of the randomness while you’re clicking and dragging. Below are some of my favorites that have come out:
There are more examples in my flickr gallery.
About a year ago, I was playing around with the Flash drawing API. I was taking images, feeding them into Flash, and using getPixel to grab colors and draw random lines around to make a scribbled version of the image. It looked pretty neat, and I wanted to look into more complex ideas, such as connecting similar pixels. Of course, this killed Flash: a 600 x 400 image creates a pixel array with 240,000 values, and Flash just can’t handle that efficiently.
But Processing can. So I started by connecting all the pixels with the same color, and then moved to ones with similar color (based on the difference in hue, brightness, and saturation). I’ll post more as I progress. You can see a start of the applet here: imageMapper_02.
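The “similar color” test amounts to a distance in HSB space. Here’s a hedged Python sketch of one way to do it; the equal weighting of the three components and the 0.1 threshold are my assumptions:

```python
import colorsys

def hsb_distance(rgb1, rgb2):
    """Distance between two RGB colors (floats in [0, 1]) as the sum
    of differences in hue, saturation, and brightness. Hue wraps
    around, so take the shorter way around the color wheel."""
    h1, s1, v1 = colorsys.rgb_to_hsv(*rgb1)
    h2, s2, v2 = colorsys.rgb_to_hsv(*rgb2)
    dh = min(abs(h1 - h2), 1.0 - abs(h1 - h2))  # hue is circular
    return dh + abs(s1 - s2) + abs(v1 - v2)

def is_similar(rgb1, rgb2, threshold=0.1):
    return hsb_distance(rgb1, rgb2) <= threshold
```

Tuning the threshold (and the relative weights) changes how aggressively pixels get grouped, which has a big effect on how busy the final render looks.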