Friday, June 11, 2010

Project 1 revisited (almost)

This is too late to be useful but it's still interesting so here it is:

So I fully intended to beef up Project One, but it looks like it won't make it by the end of the term. All the work is done but the rendering is still 3 or 4 days out. Here's the story:

I decided I wanted to fix up the following items:

1) Better lighting using an HDRi light probe - I went out and took panorama shots of my shooting location on a similarly bright day and stitched them together using Autodesk Stitcher Unlimited (free 15-day trial - kinda cool interface - dual platform - crashes a bit). There are many other options for this task.



Artistic disclaimer: I wasn't real specific with what I shot, and I only needed vaguely correct lighting, not a perfect panorama, since it's not viewed directly. Also, I didn't collect different shots of the same thing at different exposures for creating HDR info. You can do this, but it's time-consuming and for my purposes not necessary. Content-wise, there was enough 'correct' info in it to give convincing, interesting-looking reflections, and besides that I like it this way, so there.


I then created a 'light probe', which is like a spherical projection, but covering the full 360 degrees instead of 180. A light probe is an angular map type graphic (like taking a picture of a globe and using it as a flat map of the earth, except you get both sides).

This was created with HDRShop (free but Windows only), just as an LDR light probe (no HDR info), but Blender was perfectly happy with that. Note, the resolution on these things doesn't have to be high at all; I made mine much higher than necessary because I thought it was interesting looking. The one piece of info missing from the guides is that the source panorama is called a latitude-longitude map, and you do a conversion from that to an angular map/light probe. You don't need an HDR-type graphic to make this conversion.
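(For the curious: you can even skip HDRShop and do the latitude-longitude to angular map conversion yourself. Below is a minimal Processing sketch of the idea - note the input filename is made up, and HDRShop's exact axis/flip conventions may differ from mine, so treat it as a sketch of the math rather than a drop-in replacement.)

//minimal sketch: latitude-longitude panorama -> angular map / light probe
//assumptions: input file named "latlong.jpg" (hypothetical), square output,
//Debevec-style angular map looking down -z; axis flips may need tweaking
PImage src;

void setup()
{
  size(512, 512);
  src = loadImage("latlong.jpg");
  loadPixels();
  for (int y=0; y<height; y++)
  {
    for (int x=0; x<width; x++)
    {
      //map the output pixel to a [-1,1] square centered on the probe
      float u = 2.0*x/width - 1.0;
      float v = 2.0*y/height - 1.0;
      float r = sqrt(u*u + v*v);
      if (r > 1.0) { pixels[y*width+x] = color(0); continue; } //outside the probe circle
      //angular map: distance from center is proportional to angle off the view axis
      float theta = atan2(v, u); //spin around the center
      float phi = r * PI;        //0 at center, PI at the rim = full sphere
      //unit direction vector for this probe pixel
      float dx = sin(phi)*cos(theta);
      float dy = sin(phi)*sin(theta);
      float dz = -cos(phi);
      //direction -> latitude-longitude (equirectangular) sample coordinates
      float lon = atan2(dx, -dz);             //-PI..PI across the panorama
      float lat = acos(constrain(dy, -1, 1)); //0..PI top to bottom
      int sx = int((lon/PI + 1.0) * 0.5 * (src.width-1));
      int sy = int(lat/PI * (src.height-1));
      pixels[y*width+x] = src.get(sx, sy);
    }
  }
  updatePixels();
  save("angularmap.png");
  println("done");
}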

Very nice and pretty. These will do the lighting for me and create accurate, believable reflections too. The trade-off is that rendering in Blender takes forever. I've had 4 computers at this for 5 days and I'm still only 40% of the way there. When the lab machines clear up after finals I should make a bit more progress. [Edit] It's Friday and my lab codes have already expired. I've probably already lost any progress they made, as they get wiped this weekend. So make that 20% done.


More info + guides here:

http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/HDRi
http://www.google.com/translate?u=http%3A%2F%2Fblenderclan.tuxfamily.org%2Fhtml%2Fmodules%2Fcontent%2F%3Fid%3D12&langpair=fr|en&hl=en&ie=UTF8
http://blenderartists.org/forum/showthread.php?t=172030
http://wiki.blender.org/index.php/Doc:Manual/Lighting/Ambient_Occlusion#Ambient_Colour
http://debevec.org/Probes/
Yafray: http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Yafray_Render_Options

old regular lighting:
new HDRi lighting:



2) Slight color correction of original video to make it less grey (done)

3) a shadow under the car (easy enough)

4) An oscilloscope instead of animated lips for the transformer. Besides looking way cool, it helps cue who's talking at any given moment, since the dubbed voices make it seem like bad anime. (Like there's good anime?) I used Processing to create the frames and saved them out at 1/24-second intervals. I used another Processing sketch I wrote to replace all black pixels with alpha-0 pixels. This image sequence gets fed into Blender as a texture, and with some careful math everything lines up nicely. I can't wait to see this all put together. I'll post the Processing code at the end of this post.




5) Last but not least is a little better sync on the match motion. This will be done rotoscope style in After Effects. I had started this with the old data but realized that, just in case, I should wait till I have the new stuff so I'm not wasting time. This is the only step not yet done (besides waiting for rendering). ----Oh, and I originally used a program called PFHoe for match moving. It's by The Pixel Farm (they make PFTrack, used on Cloverfield). It's waaaaaaaaaaaaaaay faster (and more stable) than Icarus. It does cost money to use officially, though. In any case, they have an excellent tutorial that anyone interested in match moving should check out regardless of what program they use, as it explains simply a lot of the concepts involved in match moving (the importance of focal length, lens distortion, parallax---and it's under 10 min so you don't get bored).

So there you have it. Project 1: almost redone but not quite. I spent a bit of time on this that I really should have spent on Project 2, which kinda sucks because it's the end of the term and I still don't have anything done to show for it, but anywayses, when it's finished I'll post it here for archive completeness.

That's it for official blog entries. Everything from now on will be bonus ramblings! Thanks to everyone, I enjoyed and appreciated this class very much, and the building projection show was amazing - great work everybody.

-=-===--=-=-=-=-=-=-=-=-=-
STOP READING HERE
=-=-=-=-=-=-=-=-=-=-=-=-=-

Processing code mentioned above (cut and paste should work - params in code will need to be set)
1) ------------Oscilloscope image sequence from audio file:--------------



//used to create oscilloscope movie of sound
//note: You must manually close the window when done to stop recording frames

//import ddf.minim.signals.*;
import ddf.minim.*;
//import ddf.minim.analysis.*;
//import ddf.minim.effects.*;


//originally the sound source was just the built-in mic but modified to open a sound file

Minim minim;
//AudioInput in;
AudioPlayer in;

void setup()
{
  size(512, 200, P3D);
  frameRate(24); //make it film friendly 24fps

  minim = new Minim(this);
  minim.debugOn();

  // get a line in from Minim, default bit depth is 16
  //in = minim.getLineIn(Minim.STEREO, 512); //use built-in mic

  in = minim.loadFile("Submix1.wav"); //wav or mp3 (no aiff)
  delay(1000); //just to make sure the processor's caught up (probably not necessary)
  in.play();
}

void draw()
{
  background(0); //black background
  stroke(3,255,12); //mostly green line

  // draw the waveforms, one line segment per sample pair
  for(int i = 0; i < in.bufferSize()-1; i++)
  {
    line(i, 50 + in.left.get(i)*50, i+1, 50 + in.left.get(i+1)*50);
    line(i, 150 + in.right.get(i)*50, i+1, 150 + in.right.get(i+1)*50);
  }

  saveFrame("frame-####.png"); //one png per 1/24 sec frame - close the window to stop
}

void stop()
{
  in.close(); //always close Minim audio when done
  minim.stop();
  super.stop();
}
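One caveat on the sketch above: the audio plays in real time while saveFrame() just grabs whatever is on screen, so if your machine can't actually render and save at a steady 24 fps, the saved frames will drift behind the audio. Worth sanity-checking the sync after assembling the sequence.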

2) --------------Replace black pixels with alpha channel---------------




//Batch processes a set of images (png)...
//...replacing any black pixels with an alpha background
//(note: currently 9999 image limit - easy to raise to whatever you want)
//Clay Kent 2010 based on code from http://processing.org/discourse/yabb2/YaBB.pl?num=1194820706/4


//import processing.opengl.*;
PGraphics alphaImage;
int startFrame = 1; //set this..
int endFrame = 3123;//.. this..
String fileNamePrefix = "frame-"; //.. and this (example for "frame-1234.png" or "frame-0001.png")
String fileTypeSuffix = ".png";
//note this is set up for png but I think tiffs are supported too (jpgs work as well but contain no alpha info)

PImage img;
int currentFrame;

void setup()
{
  size(512, 200); //..oh yeah and this - frame size goes here
  colorMode(HSB,255);
  currentFrame=startFrame-1;
}

void draw()
{
  currentFrame++;
  if(currentFrame>endFrame) //process through endFrame inclusive
  {
    println("done");
    exit();
  }

  image(loadImage(fileNamePrefix+ nf(currentFrame,4) +fileTypeSuffix), 0, 0, width, height);
  alphaImage = createGraphics(width, height, P2D);//frame size could be set here too (including shrinking expanding etc)
  alphaImage.beginDraw();

  loadPixels();
  alphaImage.loadPixels();
  float h,s,b;
  for (int i=0; i<pixels.length; i++)
  {
    h=hue(pixels[i]);
    s=saturation(pixels[i]);
    b=brightness(pixels[i]);
    // re-use the brightness value as the alpha --
    // (since the pixel array, strictly speaking,
    // does not contain alpha values (whoops.)
    // in this example, if the brightness is 0,
    // use 0 alpha, otherwise use full alpha.
    if (b>0) alphaImage.pixels[i]=color(h,s,b,255);
    else alphaImage.pixels[i]=color(h,s,b,0);
  }
  alphaImage.updatePixels();
  alphaImage.endDraw();

  println(fileNamePrefix+"_alpha"+ nf(currentFrame,4) +fileTypeSuffix);
  alphaImage.save(fileNamePrefix+"_alpha"+ nf(currentFrame,4) +fileTypeSuffix); //tweak name to suit
}





3) ---------------Create Movie from image sequence (so you don't have to do it in Ae, FCP, QT-Pro etc) note - no alpha channel support--------------





//Batch processes a set of images (png)...
//...creating a quicktime movie file with the animation codec
//place source files in this folder (where the .pde file is) and set the vars below, then run
//(note: currently 9999 image limit - easy to raise to whatever you want)
//Clay Kent 2010


import processing.video.*;
MovieMaker mm;
//import processing.opengl.*; //note OPENGL seems to crash this??
int startFrame = 1; //set this..
int endFrame = 200;//.. this..
String fileNamePrefix = ""; //.. and this (example for "frame-1234.png" or "frame-0001.png")
String fileTypeSuffix = ".png";
//note this is set up for png but .jpg is supported and possibly .tiff

PImage img;
int currentFrame;

void setup()
{
  size(800, 600); //..oh yeah and this - frame size goes here //don't use OPENGL - weird bug

  currentFrame=startFrame;
  //mm = new MovieMaker(this, width, height, "drawing.mov");
  mm = new MovieMaker(this, width, height, "imageSequencedMovie.mov", 24, MovieMaker.ANIMATION, MovieMaker.HIGH, 24);
}

void draw()
{
  if(currentFrame>endFrame) //process through endFrame inclusive
  {
    println("done");
    mm.finish();
    exit();
  }

  background(0);
  image(loadImage(fileNamePrefix+ nf(currentFrame,4) +fileTypeSuffix), 0, 0, width, height);
  println(fileNamePrefix+ nf(currentFrame,4) +fileTypeSuffix);
  //saveFrame(fileNamePrefix+ nf(currentFrame,4) +"_alpha"+fileTypeSuffix); //tweak name to suit //("frame-####.png");
  mm.addFrame();
  currentFrame++;
}



Thursday, June 10, 2010

Project 2: done

The computers are rendering out the H.264 and I'm all done in time for a good night's sleep before tomorrrrrr..............&&^%#$. Oh well, time for a nap. That divine inspiration I was waiting for never came; instead you're all getting Elvis......... I did use that audio-to-IPO-curve script thing though. That part worked out all right.

Building Projection from Clay Kent on Vimeo.




Anyways......Here's some 'making of' shots:






Thursday, May 27, 2010

Project 2 Update

I haven't posted in forever, so here is an update on various things 3D.

For the building projection (Project 2), I'm looking into a program called VVVV. It's Windows only, but it looks like it was important in most of the 3D building projection videos you see on YouTube. It uses 'nodal' programming like Max or PD or Isadora, and it can deal with 3D meshes, effects, texturing, whatever, in real time.

This page describes the process of creating building projections.

It's getting a little late to take on any new technology, so I'll have to see if this software is friendly enough to use before I commit to it for my project. In any case, the first step appears to be creating a 3D model of the surface being projected on. So I corrected my cell phone pic of Villard (pillow/pincushion distortion and source angle) to be more accurate, and I am now starting to model it in Blender. Feel free to use this; I have no idea if it's any better/worse than the official pic, but I wanted to try it.

So now it's off to Blender to create the 3D version of this. Unless anyone has already done this? Anyone? Anyone? I'd be willing to trade/collaborate on something programming- or engineering-wise. Yes, I'm lazy; I wish I had the attention span for extremely detailed modeling, and I envy people who do. Instead, I get distracted trying to figure out other ways to do stuff like this, like.........

Check out this way of automatically creating 3D structure from real objects:

extremely cool interactive point cloud thing
http://www.openprocessing.org/visuals/?visualID=1995

how to:
http://www.instructables.com/id/Structured-Light-3D-Scanning/

http://createdigitalmotion.com/2009/02/simple-diy-3d-scanning-projector-camera-processing/

http://www.openprocessing.org/visuals/?visualID=1014

master site: a bit terse itself, but links everywhere else
http://sites.google.com/site/structuredlight/implementations

Wednesday, May 5, 2010

Transform Sequence take 2

Hopefully this isn't my final Project One, but it's my emergency backup in case my tech problems keep persisting. It is in pretty HD though, so be sure to see it full screen.

Xform from Clay Kent on Vimeo.




My original idea used match moving and mixed in real video quite a bit, which technology-wise isn't working out so great yet. I've tried Icarus on Mac and PC, Voodoo on PC, every setting imaginable, and soooooooooooo much time, but both of those programs eventually just crash with no results when there's anything longer than 10 seconds. I'm going to reshoot my video this weekend with no camera movements and without match moving, so at least I can turn in something that resembles my original idea. Perhaps I'll use fake camera shaking in After Effects or something to simulate it. Anyways, enjoy the work in progress...

Thursday, April 29, 2010

Text Render

Blender text render test from Clay Kent on Vimeo.



This will eventually go into my old project when I get around to rerendering it.

24p fix

For my project I am using live action with Blender stuff composited in. I have a Canon Vixia HV30 high-def camcorder that takes great 1080-resolution video. It even does 24p - the option is somewhat buried, but it's in there. Problem is, getting 24p out of a camcorder into anything is a serious PITA. You'd think it would be automatic, but not yet. The 24p comes out coded as 60i. Pulldowns and reverse telecine are big long blog entries in their own right (and aren't needed at all in HD digital video, but for some reason they are still there), so I won't go into it other than saying that's the stuff you have to undo to get your 24p back. So if you want to get at your 24p video (like to match my Blender renders), you need to decode it somehow. After much research I came across a couple of ways to do it in Compressor and Cinema Tools (I think After Effects can do it too), but then found a much, much easier way. Get yourself this freeware program called JES Deinterlacer.


---------GET THIS------ http://www.xs4all.nl/~jeschot/home.html ---------GET THIS------


Launch it up, load your movie, select the Project box, select Reverse Telecine, and let it do its thing. And voila, 24p restored. No more weird jaggies and hard-to-track video.
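If you're curious what that 'undoing' actually looks like, here's a toy sketch - my own illustration, not what JES Deinterlacer does internally, and it shows the classic 2:3 cadence while the real camera's cadence details may differ. Four 24p frames get spread across five 60i frames, two of which are 'dirty' mixes of neighboring frames, and reverse telecine rebuilds the originals from the fields:

//toy illustration of reversing classic 2:3 pulldown
//(assumes the cadence phase is already known - real tools have to detect it)
//four 24p film frames A B C D become five 60i video frames:
// video frame 0: A-top A-bot <- clean
// video frame 1: B-top B-bot <- clean
// video frame 2: B-top C-bot <- mixed ("dirty")
// video frame 3: C-top D-bot <- mixed ("dirty")
// video frame 4: D-top D-bot <- clean
String[][] video = {
  {"A-top","A-bot"}, {"B-top","B-bot"}, {"B-top","C-bot"}, {"C-top","D-bot"}, {"D-top","D-bot"}
};

void setup()
{
  //reverse telecine: keep the clean frames, rebuild C from the two mixed ones
  String[][] film = {
    { video[0][0], video[0][1] }, //A: clean
    { video[1][0], video[1][1] }, //B: clean
    { video[3][0], video[2][1] }, //C: top field from frame 3, bottom field from frame 2
    { video[4][0], video[4][1] }  //D: clean
  };
  for (int i=0; i<film.length; i++) println(film[i][0] + " + " + film[i][1]);
}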

One more note: in yet another holdover from analog video, QuickTime's default display for 1080 lines of resolution is actually 1062, to hide possible analog edge broadcast distortions. (Has 1080 video ever been broadcast in analog? Who made this nonsense a standard?) You don't need it, and you can get your 1080 back fairly easily. Open the clip in QuickTime, hit cmd-J for the movie properties window (or get it from the Window menu), go to Aperture Conform, and select 'Production' instead of 'Clean'; instantly your 1080 lines come back. Save your changed movie and you're all done. Then take it into FCP or wherever and enjoy 24p. Better directions and more info here.

You may also have noticed that instead of 1920 columns you have 1440. That is because the HDV codec used in the camera uses non-square pixels (a 4:3 pixel aspect ratio, so 1440 x 4/3 = 1920) which, when rendered out, come out to 1920. Nothing you can do about that. So no true HD yet. (ughhhh, I want a RED so bad)

Some better written info on all of this here:
http://eugenia.gnomefiles.org/2007/07/13/canon-hv20-24p-pulldown/

Wednesday, April 28, 2010

Project One progress

Very rough transformer sequence. Rendered on a netbook and assembled with Windows Movie Maker......


(sorry for the non-Vimeo video, but it's late and I want some sleep and Vimeo won't start converting for another half hour and the source quality isn't that great anyway and it's a full moon ......)