3D on the cheap(ish)

I'll preface this post with the disclaimer that this isn't a good way to shoot stereo and I wouldn't shoot anything I was getting paid to deliver with this setup. But I love experimenting and right now I'm looking for ways to independently shoot high quality stereo images that don't involve thousands and thousands in equipment rentals, insurance out the wazoo, and all the accompanying personnel to make it go.

For stereo acquisition, putting DSLR's in the most cost-effective beamsplitter we could find was certainly an avenue worth exploring, but I'll say it again: it's a less than ideal way to get there.

Peter Clark from Attic Studios and I did these stereo tests with Canon 5D's 3 or 4 months ago and I've been meaning to write this post ever since. Sort of like these NAB interviews that are sitting here in a Final Cut bin ;(

The company that provided the beam splitter, 3D Film Factory, wrote this blog post a while back, publishing a truncated version of our findings. This post will attempt to go a little deeper into how we did it, what worked, and what didn't.

Here's how it all breaks down -

These are the problems with using DSLR's for stereo acquisition -

No way to genlock them, or in other words, force both sensors to begin scanning at the exact same moment in time. If you don't have genlock, you don't have 3D. Temporal offsets kill the stereoscopic illusion and will quickly derail any attempt to make 3D. Also, no timecode. While this isn't necessarily a deal breaker, having it certainly makes syncing the left and right eye images together a heck of a lot easier. But as we discovered, there are some practical workarounds.
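To put a number on why sync matters so much: any horizontal motion that happens during the gap between the two shutters reads as false stereo disparity, i.e. bogus depth on anything that moves. A quick back-of-the-envelope sketch (my own illustration and numbers, not from any rig spec):

```python
def false_disparity_px(speed_px_per_frame: float, sync_error_frames: float) -> float:
    # Horizontal motion during the sync gap shows up as spurious disparity.
    return speed_px_per_frame * sync_error_frames

# A subject panning at 20 px/frame shot with a half-frame sync error picks up
# 10 px of false disparity -- plenty to break the stereo illusion.
print(false_disparity_px(20, 0.5))  # 10.0
```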

These DSLR cameras are notorious for their Rolling Shutters. It's bad, but only slightly worse than the Red One, which was and still is, to a certain extent, very commonly used for 3D. I basically approached using the Mark 2's in stereo like I would Red One's - just make sure the sensors are both scanning in the same direction, which means mounting the reflected eye camera from the top instead of the bottom (more on that in a bit).

The goal of this test was to try and use somewhat readily available or cheaply rented articles and to use software that most people are already using, such as Final Cut Pro and PluralEyes, without having to spend a bunch of money on custom plugins and codecs. Of course in the end, all of this bric-a-brac added up to a hefty sum and for all of the headaches and workarounds needed to make it function, in my mind it's not worth it. You're better off sweet talking someone with deep pockets into financing your "dream" 3D project and just renting a couple of Sony F3's and an Element Technica Pulsar. There's a reason we have pro gear in this business. Time saved not trying to make the hoopty rig go is time spent crafting the images the client is paying for.

Here's what we used to make this happen:

IMG_4923.jpg

2x Canon 5D Mark 2's with Battery Grips so we could avoid taking the cameras out of the rig to change batteries.

AC Adapters would have been better but we didn't have them. Once the cameras are in, don't touch 'em! Aligning the Film Factory rig is a brutal chore. Make sure to record sound because this workflow needs audio tracks on both cameras to work. 

1x 3D Film Factory Beamsplitter Rig. 

IMG_4929.jpg

Gotta make sure that mirror is at 45 degrees. That's a great place to start. 

I'm not sure which Film Factory Rig we had, it might have been the Mini. The rig itself is made up of readily available 80/20 Aluminum tubing, I would guess about $500 or $600 worth, which means there's a significant margin on the rig if you were to buy one. While apparently an alignment plate does exist that will allow you to adjust pitch, height, and roll so as to match the position of one camera to the other, we didn't have it for this test. Without those adjustments, you're really up against the wall as far as your alignment goes. I was able to approximate an alignment using video overlays but there's no way to correct for foreground to background vertical offsets like using Z, Pitch, and Roll adjustments on an Element Technica rig. So in other words, if you do find yourself working with the 3D Film Factory, make sure to get the "Specialty Plate".

2x Canon 50mm Lenses set to manual. 

A little wider would have been better but once again, this is what we had to work with. 

2x Pocket Wizards with Sync Shutter Cables. Very important.

Pocket_Wizard_Plus_II.jpeg

Here's where we got theoretical - As I mentioned, one of the main issues with using DSLR's for stereo is that there is no way to genlock them together. In order to create the illusion of binocular vision, the sensors on both cameras need to start scanning at exactly the same moment in time. It's like syncing cameras together for live TV, all your AV devices need to be hitting on the same cylinder so that when the switcher goes to a camera, it's an instantaneous switch and not a frame or two of black or garbled signal while phase is being found. DSLR's have no ability on their own to be locked to an external signal but there's a device called a Pocket Wizard that's used to lock the shutters of multiple cameras to a strobe light. It works great for shooting stills so we thought that if we used the Pocket Wizard to shoot a few stills while video was rolling, theoretically that would align the shutters within tolerance. After trying it a few times, we found that to be the case. It actually works quite well and all the stereo video we shot with the Mark 2's had zero temporal sync issues.

1x Set of heavy tripod legs to get the rig onto. It ain't light, so think standard Ronford Bakers or heavy duty Sachtlers.

1x Consumer grade 3DTV. We had a Panasonic Viera with Active Shutter Glasses. I was hoping to be able to monitor the 3D images in real time on this display but there were some snafus.

Here's the rig up and running. We had to use duvetyne scraps and black gaffer to make it light tight. Looking good.

IMG_4980.jpg
IMG_4972.jpg

2x Blackmagic HDMI to SDI Mini Converters. I needed these to convert the camera's HDMI out to SDI for use with the new AJA Hi5 3D "mini muxer". It takes 2 discrete SDI signals (i.e., left and right eye) and muxes them into a single 1920x1080 raster that can be output in side by side, interleaved, etc. for monitoring in stereo. This is a sweet little box and it would have been perfect if the Mark 2 didn't output some funky, irregular video signal. I couldn't get the box to take the converted signals for more than a few seconds. We tested with other SDI sources though and it works great so it was on to Plan B.
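To make the muxing idea concrete, here's a sketch of what "combining two eyes into one raster" means in practice. This is my own NumPy illustration of the general technique, not the AJA box's internals:

```python
import numpy as np

def mux_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Squeeze each eye to half width (here crudely, by dropping every other
    # column), then butt them together: left eye on the left half, right on the right.
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def mux_line_interleaved(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    # Alternate scan lines: even lines from the left eye, odd lines from the right.
    out = left.copy()
    out[1::2] = right[1::2]
    return out

left = np.zeros((1080, 1920), dtype=np.uint8)       # dummy left-eye frame
right = np.full((1080, 1920), 255, dtype=np.uint8)  # dummy right-eye frame
print(mux_side_by_side(left, right).shape)      # (1080, 1920)
print(mux_line_interleaved(left, right).shape)  # (1080, 1920)
```

Either way, the point is that both eyes ride inside a single standard 1080p signal that any monitor or switcher can carry.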

1x Leader LV5330 multi SDI monitor. 

This is probably the most expensive thing used on this test and is certainly not your typical "indie" tool. Because the muxer didn't like the Mark 2 signal I had no way of monitoring stereo in realtime, but with my scope I knew that I could at least align the 2 cameras in the rig to the ZERO position. In 3D, you have to start from zero: both cameras must be seeing approximately the same frame and from there you can separate them to create interocular distance, thus creating stereo images. The Leader can take 2 SDI sources and freeze frames, so what I did was set the left eye position, froze the frame, and then switched to the right eye and adjusted the position until it matched the overlay of the other camera. A crude alignment but a successful one. If you can't monitor both eyes simultaneously, I don't know how else you would align other than the freeze frame method. The alignment was incredibly frustrating and involved wedging and shimming, sliding and taping. Basically creating a Frankenstein-like creation just to get a semblance of an alignment, and this was only for the foreground. Fortunately the deepest thing in our scene was only about 25 feet from the lens so we were in the margin of error. I could tell by looking at it though that there were major offsets and if we had been outside with deep backgrounds, we would have been in trouble.

While I was at it, I also used the scope to match the picture profile of one camera to the other. There's always a lot of green/magenta shift when going through the beamsplitter mirror so if you have the means to correct for it at the source, it's always a good idea. I think for this I used my usual preferred Picture Style - Faithful with Contrast all the way down and Saturation down a few points. I then used the White Balance and Tint controls to dial in the best match I could create for the pair.

Once I had Zero, I measured the lens to subject, lens to foreground, and lens to background distance. I then used the iPhone app, IOD Calc, to find an interocular distance that would put me safely within this range. Because there was no way to converge, or toe-in, the cameras without ruining the alignment, I just left them in Parallel knowing that I could always adjust the convergence in post. I set my IO distance and we were ready to shoot. Because there was no way to monitor in 3D we just kind of winged it and hoped for the best.
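For a feel of the kind of math an interocular calculator does, here's the classic "1/30 rule" of thumb: keep the interaxial to no more than about 1/30th of the distance to the nearest subject. This is just the rule of thumb, not IOD Calc's actual formula, which also weighs things like focal length, sensor size, and intended screen size:

```python
def max_interaxial_inches(near_distance_ft: float) -> float:
    # 1/30 rule of thumb: interaxial <= nearest-subject distance / 30
    return near_distance_ft * 12 / 30

# Nearest subject 10 feet away -> keep the lens centers within about 4 inches.
print(max_interaxial_inches(10))  # 4.0
```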

IMG_5164.jpg

POST:

1x license of Final Cut Pro / Compressor. 

We wanted to see stereo bad so after shooting a few tests, we took the Left and Right eye images into Compressor and made ProRes files making sure that the audio recorded to the cameras was embedded in the new files. 

1x license of PluralEyes. 

Next we imported all of the transcoded material into FCP, and set up a timeline for PluralEyes sync. Because the audio is the same on both left and right eye images, PluralEyes does a frame accurate sync and from there it's only a matter of getting them into a stereo pair somehow. 
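Under the hood, audio sync tools work on an idea like the following. This is a sketch of the general technique, not PluralEyes' actual algorithm: cross-correlate the two cameras' audio tracks and take the lag with the strongest match as the offset between them:

```python
import numpy as np

def find_offset_samples(audio_a: np.ndarray, audio_b: np.ndarray) -> int:
    # Returns how many samples audio_a lags behind audio_b.
    corr = np.correlate(audio_a, audio_b, mode="full")
    return int(np.argmax(corr) - (len(audio_b) - 1))

rng = np.random.default_rng(0)
track = rng.standard_normal(4000)                        # a short chunk of "audio"
delayed = np.concatenate([np.zeros(250), track])[:4000]  # same audio, 250 samples late
print(find_offset_samples(delayed, track))  # 250
```

Divide the sample offset by the audio sample rate and the frame rate and you have the offset in frames, which is exactly what gets applied in the timeline.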

fcp_3d.jpg

There are a lot of ways to do this but I didn't want to spend any money on plugins, so I thought: how can we easily create a Side by Side 1920x1080 video right in FCP? It's actually incredibly easy. PluralEyes put the 2 image streams in 2 separate video tracks and synced them together so they're on the same frame. Now take the left eye video, in the Viewer go to the Motion tab, go to Distort and then in Aspect Ratio type in 100. You now have an anamorphically squeezed one half of a stereo video signal. You need to get it on the left side though, so in Center type -480. This will place it on the left edge of frame and it will occupy exactly 50% of it. Now with the Right Eye, first do a little Flip Flop Effect to get it in the right orientation. Any time you're dealing with mirrors there is always image inversion. In the right eye video's parameters do the same thing but in Center type 480 instead of -480. You now have a stereo pair in your timeline. You can't really make screen plane adjustments to them, at least not easily, and in order to edit them, they would need to be output again so that the 2 images get baked into the same raster. But it does work and if you have FCP, you don't need anything else.
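A quick sanity check on those Center numbers (assuming FCP's Center is measured in pixels from frame center, and that the Aspect Ratio distortion has squeezed each 1920-wide clip down to 960):

```python
FRAME_W = 1920
HALF_W = FRAME_W // 2  # each eye is 960 px wide after the anamorphic squeeze

def eye_edges(center_offset: int) -> tuple:
    # Left and right pixel edges of a 960-wide clip whose center has been
    # shifted from frame center by center_offset.
    center_x = FRAME_W / 2 + center_offset
    return (center_x - HALF_W / 2, center_x + HALF_W / 2)

print(eye_edges(-480))  # (0.0, 960.0): left eye fills the left half exactly
print(eye_edges(480))   # (960.0, 1920.0): right eye fills the right half
```

So -480 and +480 aren't magic numbers: they're exactly a quarter of the frame width, which is what parks each half-width eye flush against its edge.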

1x Matrox MXO2 LE. 

Now the fun part - Watching your stuff in 3D. In order to watch stereo from your timeline, you've got to have some sort of external hardware that will get the video signal off your computer and into an HDMI or SDI cable. I've had the Matrox box for a few years and to my delight, they keep adding functionality to it free of charge all the time. They recently added 3D support to the HDMI output so if you have Side by Side media in FCP, the Matrox can send it to an HDMI receiver like a 3DTV and flag the signal as stereo so that it knows how to display it. This worked great with the DIY Side by Sides I made in FCP and we were watching stereo from my laptop in realtime. Awesome.

IMG_5020.jpg
andrea1.jpg

Model: Andrea Grant

That's it in a nutshell. It's always fun to experiment with this stuff. Like I said, this wouldn't be my first choice but in a pinch, you could make it work.

Peter will be publishing the stereo video from these tests online at some point. I'll post when I have it. 

Canon Picture Styles and Chroma Du Monde

I was curious about the real differences on the video level between the various Canon HDSLR Picture Styles so I set up the DSC Labs RED CamBook Chroma Du Monde 28R and evenly lit it with ambient daylight.

redbook2.jpg

Converting the camera's HDMI OUT to SDI, I set the exposure in the recommended way for the chart, putting the grayscale's crossing point at just under 60% on the waveform. I then made no exposure changes and just dialed through the various Canon Picture Styles capturing the waveform and vectorscope data from the Leader monitor.

Camera and Lens: Canon 5D Mark 2, Canon 24-105mm f/4 Zoom

All styles were adjusted to the following "standard practice" specs:

Contrast: all the way down

Sharpness: all the way down

Saturation: down 2 points

For Neutral and Faithful, the two "out of the box" Picture Styles that I find to be well suited for video, I looked at Saturation -1 as well to see how much difference 1 point makes on the scopes. The answer is a lot. Most people would agree that the colors on these cameras are oversaturated and need to be backed off a bit to look more natural and less "video". When adjusting a video camera's colorimetry using DSC charts, the theoretical goal is to ensure faithful color reproduction by aligning the primary colors into their targets. The color response this creates, however, may not be suitable for all projects; it may even look a bit oversaturated compared to the low-sat "film like" color matrices found in many prosumer camcorders.

Have a look:

canon_vector2.jpg

Let's have a closer look at Faithful, Saturation -1

faithful1.jpg
f1_vector_target.jpg

This is about as close as we can get to hitting our targets with the Canon 5D Mark 2. In my opinion this setting doesn't look as nice as Saturation -2 so as always, your eye is really your best tool for image evaluation.

Canon Picture Styles: Looking at these thumbnails alone, it's a little hard to tell the difference.

captures_comp.jpg

Waveforms: Glancing at these, however, you can see that there is a small difference in gamma response from Style to Style, Neutral being the most compressed in comparison.

canon_wfm.jpg

Vectorscope: Even with the saturation turned down, Portrait and Landscape are very extreme color looks compared to the more muted tones of Neutral and Standard. Faithful, on the other hand, does what it says it does and offers the most accurately aligned video colors of the bunch.

canon_vector.jpg

In order to really see the differences though, you need to look at the chart in the context of video. Here's a file for you, feel free to download it and take it into FCP where you can open up the scopes and really scrutinize the differences between Picture Styles.

Note: the RED CamBook has a highly reflective surface, so it must be angled back to avoid seeing yourself in it, which is why the image is skewed in this video. There is a small amount of gamma lift that happens in Vimeo upon conversion to their format. Why does it do that? I don't know and I'm still searching for a workaround.

Canon 5D Mark 2 Picture Styles and Chroma Du Monde from Ben Cain / Negative Spaces on Vimeo.

As for the Canon 7D and 1D Mark IV? From what I saw in the tests Jem and I did at Sekonic, the color response from the various Picture Styles is very similar to the results found here.

Interesting Canon Tidbit..

Jem Schofield and I were at Sekonic USA the other day testing several of their light meters against multiple Canon HDSLR's and we found an interesting little fact. 

The test was very basic: evenly light a Kodak 18% reflectance gray card, convert the HDMI video signal to SDI, and measure it on the waveform. Set the exposure in the camera so that the card reads an even 47% (47 IRE) across, match the ISO and shutter speed on the Sekonic meter to the camera, take an incident light reading, and see if the F Stop on the meter matches the one on the camera.

We tested 4x Canon 7D's and 2x Canon 5D Mark 2's. With every single camera, setting the stop on the lens to the incident light reading on the gray card rendered the middle gray correctly in the 45-50 IRE zone. 
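The math the meter is doing is the standard incident-light exposure equation. This is textbook photometry, not anything specific to Sekonic's internals; the calibration constant C is commonly quoted around 250 for flat diffusers and roughly 330 to 340 for domes:

```python
import math

def incident_f_stop(lux: float, iso: int, shutter_s: float, c: float = 250.0) -> float:
    # Incident exposure equation: N^2 = E * S * t / C
    # E = illuminance (lux), S = ISO, t = shutter time (s), C = calibration constant.
    return math.sqrt(lux * iso * shutter_s / c)

# Roughly 1600 lux at ISO 100 with a 1/50 s video shutter works out to about f/3.6.
print(round(incident_f_stop(1600, 100, 1 / 50), 1))  # 3.6
```

The takeaway from the test is that the stop this equation spits out, dialed onto the lens, landed middle gray in the 45-50 IRE zone on every camera we tried.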

What this means is that Sekonic meters, out of the box, are very well calibrated to the standard ISO table, as are Canon cameras. I'm much more of a waveform user, but if I don't have access to my scope, it's good to know that I can accurately and repeatably use a light meter to help me judge exposure on these sometimes difficult-to-evaluate cameras.

Here's what's interesting though - as you know, these Canon cameras include a built-in multi-point reflective light meter that you can call up when rolling video by depressing the shutter button halfway. I've gotten in the habit of using this a lot and have found that setting the exposure to -1/3 or -2/3 under from the middle, "perfect exposure" zone has yielded the best results. On all 6 cameras, setting the ideal exposure on the gray card and then checking the camera's internal reflected reading yielded the same result: -1/3 underexposed. See illustrations below:

canon_gray2.jpg
canon_gray1.jpg

To re-hash, on all 6 Canon cameras we tested, setting the gray card to an ideal exposure using the waveform monitor resulted in the camera's internal meter telling us that the image was 1/3 stop underexposed. In my experience using this built-in camera meter, setting an exposure in this 1/3 or 2/3 underexposed zone has yielded the best results so this test basically confirmed how I was already working with the camera. 

What this means is that if you're exposing for midtones and using the built-in Canon meter, if you set your exposure in the middle of the scale, you'll be overexposing by about 1/3 stop. Just something to be aware of.