A Video Workflow for the RED

October 21, 2012

Despite rather infrequent posting, I still really enjoy writing this blog. It's something I do purely out of my own desire to arrive at a more in-depth understanding of the problems we encounter in the field. Lately I've had very little time for this project combined with an ever-expanding list of topics I'd like to cover. Moving into the new year, something I've been wanting to do is open this site up to other writers who have ideas for pertinent, appropriate content. Interested? Let me know.

My first guest writer is Tom Wong, a Local 600 DIT working here on the east coast. He recently engineered a short film I produced with Kea Alcock and Zina Brown, where he utilized a newly available on-set workflow for the EPIC camera which I'm excited about. My preferred way of working as an on-set colorist relies heavily on the immediacy of video, where everyone on the set sees an image that will be reflected in dailies. Because of the post-centric workflow of the RED, this is not something that could be easily achieved until now. By using the software combination of LiveGrade and Scratch Lab, it's finally feasible to work in a more traditional, "video style" color correction workflow with the EPIC. For those who like to work this way, this is great. The advantage is that the Director of Photography can now make color decisions with the DIT as they are lighting and composing shots, instead of taking the time, usually at the end of the day, to set the look for dailies before files are processed.

Thanks to director Zina Brown for letting me share images from his new film, Dreams of the Last Butterflies.

RED WORKFLOW

When doing looks with r3d files, we ideally light the scene with the subjects in place, create a test clip, bring it to the computer, and then apply color work to it. You can judge the exposure from there, or use the built-in camera tools, then apply your look, get it approved, and start applying it to the rest of the clips, matching from there. You can do your looks via REDCINE-X Pro or third-party software like Scratch Lab, DaVinci Resolve, Colorfront OSD/EXD, etc. Generally, you have access to metadata controls for the RAW to baseline your look before you apply RGB alterations on top. This can produce cleaner, better results when working with r3d because you aren't jumping straight into forcing the values.

LIVE GRADING WORKFLOW

With a camera like the Alexa, or anything similar where you are dealing with a "baked" RGB acquisition format in Log, you output a log signal from the camera, monitor it, and paint that image using software/hardware. So it goes: Log from camera to your cart, through a LUT, and you then output it back out from your cart to whoever needs it. You save your looks as CDLs or 3D LUTs, and this color correction data is then passed down the chain for use with dailies, etc.
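For reference, a saved CDL is just a small XML file carrying slope, offset, power, and saturation values. Here's a representative sketch of the format; the shot id and the numbers are illustrative, not from an actual grade:

<?xml version="1.0" encoding="UTF-8"?>
<ColorDecisionList xmlns="urn:ASC:CDL:v1.01">
  <ColorDecision>
    <ColorCorrection id="scene01_shot001">
      <SOPNode>
        <Slope>1.05 1.00 0.98</Slope>
        <Offset>-0.02 0.00 0.01</Offset>
        <Power>1.10 1.05 1.00</Power>
      </SOPNode>
      <SatNode>
        <Saturation>0.95</Saturation>
      </SatNode>
    </ColorCorrection>
  </ColorDecision>
</ColorDecisionList>

Because it's just ten numbers per shot, the data travels easily down the chain, whether as a sidecar file like this one or carried inside an EDL or ALE.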

This method is one of the most common ways of working. It's not limited to internal ProRes recording, either; it's also commonly used with ARRIRAW.

So when working with the traditional RED workflow, I like to call that an "untethered" workflow. You aren't paintboxing the image on a live signal. You shoot something, bring it to the computer, and then process there. All that information is stored within the metadata if you are using REDCINE-X, and most finishing software can just open up the r3d with the RMD (metadata) information right there, automatically putting the finishing colorist in a good spot to start from. For other cameras this can be a little more difficult, since RED has designed their workflow from the ground up to be the way it is. Passing color decisions down the whole pipeline untethered can be done, but it's not quite as streamlined as dealing with r3d.

HYBRID VIDEO AND R3D WORKFLOW: WHY AND HOW               

My goal this past year, having done so many jobs the tethered way, was to figure out how to accomplish the same thing working with the EPIC. How can we start merging the workflows at the beginning of the chain?

WHY

Why fix it when it's not broken, you may ask? Well, to me it's always been a little broken. Honestly, and I'm sure many can agree with me, it's tough to get people to run over to your cart to sign off on looks. DPs are busy on set, Directors as well, and it diminishes client confidence when they look at a monitor, don't like what they see, and you have to make them run over to the cart to show them what it will REALLY look like in the dailies and as the starting point for the finish. Being tethered, that is, color correcting a live video signal, keeps everybody on the same page, has everybody looking at approximately the same thing, and lets you make adjustments immediately, affecting lighting decisions right away, compensating when exposure changes, etc. It's instant and immediate, and I really believe you gain a lot more with this method.

Yes, you can load looks directly into the RED cameras, but you have to do it every time you make any changes to the LUT. This is impractical in application.

Working on a live signal is just faster, and problems that can be solved with lighting are easier to identify.

HOW

Back when Pomfort was beta testing LiveGrade, now a very popular piece of software for on-set live signal color correction, I was going through it and listing things I personally wanted from it. One of my suggestions was figuring out a way to come up with a Redgamma delog LUT so that I could start applying the same "video" tethered workflow I'd been using, avoiding and bypassing the REDCINE-X method altogether. Originally I said that if we could save RMDs out of LiveGrade, that would be stellar. Unfortunately, that's an all-RED format, and no one outside of RED has an SDK to create those files. But Pomfort was still paying attention, and in the latest update they added new preset modes to the delog menu. I saw the variety of cameras in there, Alexa Log C, Canon Log, S-Log, S-Log2, and then...Redgamma2, Redgamma3.

red_delog.jpg

So immediately I ran tests. I started outputting redlogfilm from my friend's EPIC, put it through my whole chain of HDLink, LiveGrade, etc., started building CDLs for the live image using the Redgamma3 preset, and created dailies from it.

A few disclaimers right now. The delog is built into LiveGrade's code; you need to export the delog, without any CDL, directly out of LiveGrade to get it as a standard .cube file. This is important because if you simply set the metadata to Redgamma3 in a dailies creation software, or even in a finishing color suite, the CDL will come in on top of the delog instead of coming in before it. The image gets linearized before the CDL instead of after it, and your CDLs won't line up. Another note: Pomfort's delog is close to a stop more crushed in the blacks compared to the camera's, but the color values all remain identical. I actually don't consider this a bad thing, because it forces more light into the image to get you out of the noise floor, and I can always bring it back up if needed since it is just a LUT. But I'll be talking to Pomfort about getting this more exact.
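To make the ordering issue concrete, here's a minimal Python sketch of why the two stackings don't line up. The delog function below is a stand-in curve, not the actual LiveGrade LUT, and the CDL numbers are made up; the point is only that the order of operations changes the result:

import numpy as np

def apply_cdl(rgb, slope, offset, power):
    # ASC CDL per-channel transfer: out = clamp(in * slope + offset) ** power
    return np.clip(rgb * slope + offset, 0.0, 1.0) ** power

def delog(rgb):
    # Stand-in for the Redgamma3 delog (illustrative only; the real
    # transform is the 3D LUT exported from LiveGrade)
    return np.clip(1.08 * rgb ** 2.2, 0.0, 1.0)

pixel = np.array([0.35, 0.40, 0.45])   # a redlogfilm code value
slope, offset, power = 1.1, -0.02, 1.2

pre  = delog(apply_cdl(pixel, slope, offset, power))  # CDL first, then delog
post = apply_cdl(delog(pixel), slope, offset, power)  # delog first, then CDL
print(pre, post)  # two different images from the same ten numbers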

So I create my CDL, check my image on the monitor coming from camera through LiveGrade, and bring it into Scratch Lab. I set the delog as a grading LUT, or you can set it as an output LUT, and then load the corresponding CDL file. It lines up perfectly. All the QuickTimes I generate look exactly as they should. So now this software combination has merged an r3d workflow into an existing pipeline that never really meshed at all with how we typically work with r3d.
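If you've never looked inside one of these exported files, a .cube LUT is just plain text: a short header followed by the full lattice of output triples, with the red channel varying fastest. A truncated, illustrative sketch (the title and sample values here are placeholders, not real LiveGrade output):

TITLE "Redgamma3_delog"
LUT_3D_SIZE 17
# 17 x 17 x 17 = 4913 lines of "R G B" output values follow
0.000000 0.000000 0.000000
0.063274 0.000512 0.000498
...
1.000000 1.000000 1.000000

Because .cube is so widely supported, the same file loads in Scratch, Resolve, Flame, and most other grading and dailies tools.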

All the finishing colorist has to do now is set all the r3ds in the timeline to redlogfilm, set the Redgamma3 delog for each clip, and load the CDL, and it will line up perfectly, exactly the same way as they do with Alexa or any other camera.

This method can be used in a VFX pipeline as well, where you have to make DPX stacks from the r3d. The LUTs can be applied to redlogfilm-based DPX files with no extra work, and now your color and looks management has been simplified. (Let's face it, not everybody can be native r3d all the time; 5k compressed wavelet is way too much for some huge VFX pipelines.) On the flip side, we aren't even deviating from the advantages of RAW. You can still modify metadata information, and for anything that has to go DPX for a quicker back and forth with whatever is part of the pipeline, the LUTs will translate all the color decisions throughout.
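As a sketch of what that leg of the pipeline is doing under the hood, here's a minimal Python/numpy take on reading a .cube file and applying it with trilinear interpolation. This is illustrative rather than production code, and it assumes the DPX has already been decoded to a float image in [0, 1]; dailies and VFX tools do this internally:

import numpy as np
from scipy.ndimage import map_coordinates

def load_cube(path):
    # Minimal .cube reader: grab LUT_3D_SIZE and the table body.
    size, rows = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("LUT_3D_SIZE"):
                size = int(line.split()[1])
            elif line and line[0] not in "#TLD":  # skip comments/keywords
                rows.append([float(v) for v in line.split()])
    # Red varies fastest in .cube, so the reshaped table is indexed [b, g, r].
    return np.asarray(rows, dtype=np.float32).reshape(size, size, size, 3)

def apply_lut(img, table):
    # img: float array of shape (..., 3) with values in [0, 1].
    n = table.shape[0]
    idx = np.clip(img, 0.0, 1.0) * (n - 1)
    coords = [idx[..., 2], idx[..., 1], idx[..., 0]]  # b, g, r lattice coords
    out = np.empty_like(img)
    for c in range(3):  # trilinear interpolation, one output channel at a time
        out[..., c] = map_coordinates(table[..., c], coords, order=1)
    return out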

So an opportunity came up where I wanted to offer my services to a production doing a short film that had a great treatment, a high concept, and people already involved that I wanted to work with. Zina Brown was the director and had been putting together a film called "Dreams of the Last Butterflies." I've seen his work, have helped him with some finishing color before, and really love the kind of films he makes. Producing was Ben Cain, and DPing was Timur Civan. The shoot would be fairly fast paced, the crew was small, and I knew that it might get really scattered and I wouldn't be directly on set all the time. So I decided to take this method and use it for a whole weekend of shooting, and I'd like to think that we got some pretty great results.

1_log_1.2.1.jpg

redlogfilm

1_gamma_1.2.2.jpg

redgamma 3 delog in LiveGrade

1_look_1.2.3.jpg

with CDL generated in LiveGrade and applied in Scratch Lab.

2_look_1.21.3.jpg

redlogfilm

2_gamma_1.21.1.jpg

delog

2_look_1.21.2.jpg

with CDL

4_log_1.84.3.jpg

redlogfilm

4_gamma_1.84.1.jpg

delog

4_look_1.84.2.jpg

with CDL

5_log_1.100.1.jpg

redlogfilm

5_gamma_1.100.3.jpg

delog

5_look_1.100.2.jpg

with CDL

ADVANTAGES AND DISADVANTAGES

Remember, this is just one way of working; it's not the best way overall. Sometimes it's not even possible to work constantly tethered. But it's the way I prefer, and I feel like I do my best work like this.

ADVANTAGES

Live decision making, which I find faster and more efficient. Everybody is looking at the same thing. Client confidence. 

Dual monitoring what you have in log and through your LUT. Seeing your log gives you a lot more info and helps with color/lighting decisions. You see how much noise is really in your shadows, how close you are to clipping or whether you really are clipping, etc.

You are part of the food chain, yay! Being able to see what's being shot is essential. You can monitor the image all the time and look for issues. I've done jobs where, because being untethered is so accepted when working with RED, I haven't been directly on the set and couldn't look at the footage until after the fact. If there was a problem, it was usually way too late.

Oddly enough, the most important aspect of all of this, and this might just be me: I've had an easier time creating looks via the CDL-and-delog method than bringing r3ds in, messing around with the tools in RCX, and applying looks on top of Redgamma3 or starting fresh with redlogfilm. The delog sets a great starting point across the board, but when you manipulate color, you are working at the CDL level, pre-linearization to Redgamma. I find it a more elegant color adjustment method. It's nearly the same principle as altering metadata first to get to a good place before you apply color: you zero in on camera ISO and color temp, and having the CDL between redlogfilm and Redgamma, to me, produced better results, faster. And if I'm just doing the color, I don't need an expensive RED Rocket to see my SDI signal; it comes right off the camera. So if I'm not doing the dailies, I don't even need a Rocket on set anymore to do color for r3d, which I would if I were going the REDCINE-X route. And that route is the only way to pass RMD information down the food chain if you aren't doing the dailies.

DISADVANTAGES

You DO lose your metadata chain for color and are now relying on a LUT-based workflow. That's no good for smaller jobs where the people involved really don't know this chain. It is genuinely great to be able to lace your color directly into the r3d and have it load up as metadata down the line; it locks in perfectly with DaVinci Resolve and Scratch, as well as other color software.

Altering metadata first before doing color is, I think, still a better way to work. It's non-destructive and really robust in how you work with your footage. Counter to that, though, if you have a really fat exposure and nail down the color temp, the CDL method can be just as clean. But the metadata advantage is a valid point, especially if you want to dial in ISO/FLUT, fix color temp, tint, etc. It can still be done with the CDL; it just might not always stay in the chain the way you want it to. Not all dailies creation software is guaranteed to maintain the ISO, color temp, tint, etc. through the pipeline inside of the r3d. Once you do it in RCX and save it, it stays with the file the whole way. You can make those additional alterations as you get the footage in, but that gets back to the issue of not everybody on set seeing what's going to show up in the dailies.

You can't put a CDL and delog into RCX. RCX is free, the best price in town, so this method puts your overhead and investment higher. I've been a die-hard Scratch Lab user this past year and haven't really used RCX much. You can apply this method in Resolve Lite, but you'll be limited to HD resolution output only, which limits you in outputting higher-res files, and I've been asked on many jobs to output 4k ProRes files for VFX as well. Also, Resolve isn't as optimized for working with the Rocket. The best I got out of Resolve with no additional color work, with a Rocket, is about 15 fps or so, when I got 20+ from RCX or Scratch. And there's the investment in something like LiveGrade, an HDLink, switchers, DAs... but if you're already doing Alexa or anything else similar, you'll likely have this stuff already.

So here's the biggest disadvantage, which I wanted to save for last. At the current moment you can't have different gamma outputs going to your SDI and to your onboard EVF or touchscreen LCD on a RED. On "Dreams of the Last Butterflies," the entire piece was shot on an AR rig. I monitored via Boxx wireless and did everything like that. The operator didn't need Redgamma on his monitor because he just needed something to frame with. So the key point RIGHT NOW is that you can't really use this method every day. I've been lobbying RED to let you set Redgamma on the onboard screens and redlogfilm on the SDI and HDMI independently.

CONCLUSION

I'll probably be doing the finishing color for "Dreams of the Last Butterflies," and loading the delog LUT and CDLs is going to simplify the process for me. It will go through a Flame artist for a beauty pass and a few compositing shots, and I know that this method is just going to make things easier. All I have to do is load in the LUTs, tweak what I need to for better matching if I didn't do a good enough job the first time around, and add a few secondaries to sweeten the footage up. The Flame artist can load the DPX log files with the LUTs I made, and versioning with that will look identical to the dailies.

Is this workflow for everybody? Of course not. It's just another way you can work. Until RED offers up independent gamma selection for your signal outputs, this method hasn't come to full fruition. But the combination of LiveGrade and Scratch Lab has opened up the potential for working with the camera in a more traditional way. Hopefully this functionality will end up in the next firmware release.

NAB 2012 - Round Up

I suspected that this year's show wouldn't be as overloaded with new product as last year, because most of the major players have already rolled out the flagships that will carry them for the next few years. I found this to be somewhat the case, but not entirely, as there were definitely a handful of "show stoppers" on display. The emphasis this year seemed to be much less on new, groundbreaking wares and more on "this is what we're working with now, and here's how we can do it better."

While NAB is in many ways a portal of things to come, this year there were far fewer 3D announcements and a lot more emphasis on 4K, which is evident in the current market as well. 3D has struggled to gain much, if any, traction outside of theatrical content, and the resolution war is heating up now that all the major manufacturers are, or will be (IBC is next), intro-ing cameras offering greater than HD resolution. This is interesting because it's really not much different than the megapixel war with consumer digital cameras. Resolution, while incredibly important, is still relative to optics, image processing, presentation, and many other factors. Bigger isn't necessarily better, though it's obviously a huge marketing opportunity for these vendors.

Maybe my interests have shifted somewhat as my market, broadcast-bound projects, has decidedly settled on the Alexa for now. Because of this, my energies are very tied up with solutions for that platform. That said, I didn't spend 3 days at the show exclusively checking out new cameras and hardware but spent much of that time researching workflow and archival solutions and demoing tons of new options for creating on-set deliverables, a topic I've covered at length on this site.

What was really excellent, though, wasn't all the new gack but the opportunity to meet in person so many people I've been in correspondence with. That's what's great about NAB: getting all these professionals from various facets of the industry together in the same location and the exchange of ideas and information that results. It's inspiring, and I left Vegas feeling optimistic about the business and where it's going.

Quick note: what I had on hand to shoot stills with this year was the trusty Leica M9 w/ Summilux-M 35mm lens. My favorite camera in the world, but definitely not the best choice for shooting product closeups, so I'll apologize for the uninspired photographic component of this post.

leicagram.jpeg

ALEXA. I was pleased to see Arri announce some very nice new features. Nothing earth shattering, but quietly useful. 

b_alexa_plus_5.jpeg

4:3 Sensor no longer exclusive to Alexa Studio:

You can now purchase an Alexa Plus with a 4:3 sensor in it. You cannot upgrade your existing camera to the new sensor, which is certain to sour a few owners. However, this is nice because the Alexa Studio is a hefty rental and, as neat as it is, I'm not entirely sold on the optical viewfinder. I'm not a camera operator, though, and a handful of my colleagues are really into it. I can definitely see the appeal. It will be great to have a more cost-effective rental option for anamorphic capture or simply recording a big old square raster with spherical lenses for VFX work. The flexibility of the Alexa system continues to evolve along with the market. Ryan Koo wrote a good article on the topic.

2K ProRes Recording:

L1001486.jpg

Existing 16:9 Alexa sensors will soon be able to record to SxS cards in ProRes 4444 or DNx RGB at 2K resolution, 2048x1152. The new 4:3 sensor will be 2048x1536. Not a ton of extra resolution but appealing nonetheless. Also on the horizon - new debayer algorithm for improved sharpness and real time ArriRaw playback out of ArriRaw Converter.

newrasters.gif

Also at Arri's booth, Pomfort was there demoing their solution for Alexa color management, LiveGrade.

Pomfort's Patrick Renner

L1001482.jpg

I've written about this software at length and have been a beta user since day one. It's really come a long way and now that CDL and Pre or Post Linearization Color Correction has been implemented, LiveGrade is a legit on-set color management solution for any number of cameras. 

I think there is such a plethora of great NAB coverage I'm not going to spend the time creating a massive post covering all the big beats like I did last year. Here's a few things that stuck with me though - 

BLACKMAGIC DESIGNS:

I'm pretty excited about Resolve 9 but interestingly enough, this is the talk of NAB 2012 - the Blackmagic Cinema Camera aka "My First 2K", a $3000 camera that comes with $1700 of freebies (and I mean that in the MOST non-condescending way. I'm actually quite interested in this camera.. but c'mon look at it.. ViewMaster!)

L1001475.jpg
L1001474.jpg
photo-1.jpg

One thing that no one is talking about with this camera is that the sensor is quite small by today's standards, a bit more generous than Super 16. The mount is EF, and these still lenses are going to be quite telephoto on this small sensor, 3x more telephoto in fact, so that super-wide-angle Canon 8mm is going to be about a 24mm in full frame terms. The other thing is that the practical resolution of a Bayer pattern chip at 2432x1366 after demosaicing is a bit less than 1920x1080, with chroma subsampling around 4:2:0. That's just the nature of debayering, but it does offer very robust recording formats: 12-bit Raw and Log-encoded ProRes 4444 and DNx RGB.

Resolve 9

L1001456.jpg

The users spoke and BMD clearly listened. Resolve 9 is now a full-fledged dailies solution with the inclusion of audio pass-through and syncing, burn-ins, a super clean interface, media management, and an intuitive new toolset. No word on whether the dailies component of 9 will be available in Lite or whether Lite will even continue to exist. I'm guessing you're going to have to shell out $1000 to have access to the new features, which is fair enough. Or you can just buy their camera and get it for free ;)

On the topic of dailies and on-set deliverables -

Everyone is getting into this game now. Assimilate was showing Scratch "The Next Thing" (working title), which is looking more powerful than ever, and in my opinion Lab still offers the best cost-to-value ratio and user support. YoYotta was demoing realtime F65 rendering with Yo Dailies, ColorFront intro'd a low-cost version of On-Set Dailies called Express Dailies, Filmlight has their low-cost version Baselight Transfer, Adobe has SpeedGrade CS6, etc. Not to mention a handful of software startups with their own offerings. Price tags on these wares run the gamut, of course, and each one offers its unique take on the complex problem of creating a dailies pipeline. Now that there are so many options, in my opinion the true separating factor will be support. The importance of having an actual human being to communicate with for troubleshooting, software customization, and feature requests can't be overstated. Among this crop, some definitely understand this whereas others, maybe not so much.

On a software-related note: the Autodesk Smoke all-in-one editing and effects package is now available for Mac at $3,500, down from $15,000. Yet another once nearly unattainable pro tool looking to go mass market through aggressive pricing.

SONY:

4k projection of a variety of material from the F65; all manner of conditions and mixed lighting. It was very good to see what this camera is actually capable of, and it turns out the potential is enormous.

L1001471.jpg

4k can only be fully appreciated in a proper 4k projection. It's difficult to gauge the extra resolution on an HDTV or even one of the smaller 4k LCD displays that were floating around the show. Suffice to say, the image quality is remarkable.

4k Stitch View:

L1001468.jpg

This is a very interesting application of 4k technology. Two F65's side by side, both rasters seamlessly stitched into one 8k picture that you can pan and tilt around in realtime with no resolution loss until you get down to 1080. It's applications like this, an unintended useful outcome of the technology, that really excite me about all this stuff. I think this technological renaissance we're experiencing in motion pictures can and should extend far beyond the realm of film/tv.

Sony NEX-FS700:

L1001426.jpg

Everything about this has me scratching my head: the form factor, the generous specs (btw, 250 fps at 1080p), the TBD 4k Raw recording, the price ("less than $10,000"). It's an odd one, but it's a potentially very cool imaging machine nonetheless.

CANON:

Canon EOS-1D C

canon-eos-1d-c-jjc.jpeg
photo-3.jpg

I'm way more excited about this than I thought I would be; the specs are out of control and the images coming out of the camera are really impressive. This is a true digital stills and motion picture camera. It's got the form factor of an SLR but all the video features you could ask for: clean output, multiple resolutions and sensor windows, multiple compression schemes, etc. The 4k video isn't raw but is compressed to 422 at 500 Mbps and written out to CompactFlash. I didn't see any interface on the camera other than HDMI, so I'm assuming 4k raw recording via transport stream isn't possible. Regardless, I think this is THE camera for someone looking to do both high quality stills and video with one machine and not looking to spend a fortune on peripheral equipment.

This cracked me up so I took a picture -

photo-2.jpg

ISO 204,800! On the monitor the video was looking super clean at 1600 but these days that isn't as special as it used to be. 

Canon C500:

photo-5.jpg

I think where the C300 was lackluster, this camera brings it. No one seems to know how these 4k streams will be recorded quite yet, but Convergent Design is ready to accommodate whatever comes with the Gemini Raw. All these cameras, it's a little overwhelming. I think with all of these new acquisition options, a universal workflow is going to have to emerge, or anything that comes out trying to reinvent the wheel is going to be sunk before the ship even sails. Once again, we've just been spoiled by the ease of the Alexa. For large scale productions requiring a fast turnaround, vendors offering up something new need to make it as painless as possible or it's going to be a tough sell.

PANASONIC:

Behind glass and very difficult to photograph was this.. 4k "Varicam" Concept Camera.

photo-6.jpg

It's modular and comes in pieces like the EPIC, and it's about the same size. Not much info to be gleaned other than the AVC-Ultra codec recorded to P2 Micro cards, which are high-capacity SD cards encased in stainless steel or some kind of tough alloy, and that the 4k recording is not Raw but rather linear RGB. I'll reserve judgment, but my instincts are leaning towards "Too little. Too late."

SONNET AND THE TOPIC OF "MAC EXPANSION":

The consensus at the show regarding Apple's commitment to the pro market was grim to say the least. Even the future of the 17" MacBook Pro has been called into question... I'm seriously about to start stockpiling computers. But you never know with Apple, and that's the thing. They could announce something tomorrow and this discussion would be over. One has to maintain a cautious optimism with Apple products, which is why I haven't started my stockpile just yet. I try and get as much mileage as I can on-set with 17" MBPs. I'll bring a tower out if I have to, but my M.O. is usually to try and keep a small footprint and do a lot with a little. That said, I'm very excited about some of the stuff Sonnet is working on. Like the RackMac 1U shelf for Mac Minis!

rack-min-2x.jpeg

And this kind of blew me away.. xMac mini Server

L1001480.jpg

xMac™ mini Server 1U Rackmount PCIe 2.0 Expansion System With Thunderbolt™ Ports

Sonnet's xMac™ mini Server (previously known as RackMac mini Xserver) 1U rackmount PCIe 2.0 expansion system with two Thunderbolt™ ports mounts a Mac® mini inside a specially designed enclosure that also contains two x16 (x4 mode) PCIe 2.0 slots, a 150W power supply, and an installed Gigabit Ethernet card. This system enables users to plug two PCIe 2.0 adapter cards (one half-length and one full-length) into slots connected to the Mac mini via locking Thunderbolt cables while allowing the connection of additional Thunderbolt peripherals to the daisy-chain Thunderbolt port.

A powerful and expandable computer that fits in your rack? This might be it. If Apple jettisons the Mac tower, maybe they'll come out with a souped-up Mac Mini. Drop it in something like this and you're ripping. At least in theory, ha.

Echo Express Pro Expansion Chassis for PCIe

These are a nice size.

echoexpress.jpeg

The Magma Thunderbolt ExpressBox 3T is a similar solution, but it's a monster. The thing holds 3 PCIe cards, so it's nearly the size of a tower. You can get to a certain point where you're trying to make a laptop into something that it's just not. Is it worth it, and is it really even feasible? At any rate, modularity is now the name of the game and I like to see lots and lots of viable options.

9-7-2011eb3ttop.jpeg

AND ONE LAST THING:

I'm really running out of time for this post, but this is an item that's very cool and under the radar that I wanted to write about -

AXIS1 Single Channel Wireless Lens Control System:

L1001432.jpg

This impressively machined motor can be used for focus, iris, or zoom and the control is very nuanced. The range is similar to what you would get with a Preston. These are produced by a gentleman in the UK named Peter Hoare and the kit goes for about $5000. I've been looking for a solution for wireless Iris control and a couple of these might be it. 

That's all I've got for now. I'll try and revisit this post at a later date. 

Shot on EPIC

Shot on EPIC from Ben Cain / Negative Spaces on Vimeo.

EPIC Test Shoot @ Attic Studios 

ISO 640, 48fps, 180 Shutter, RedCode 5:1 

RED Epic-X in Stills Configuration, RED 18-50mm @ T3

Model: Andrea Grant

andreagrant.com/ 

Shooter: Ben Cain

negativespaces.com/ 

Editor: Nate Pommer - Super Collider Post

supercolliderpost.com/ 

Colorist: Tom Wong

thewongcut.com/main.html

Lighting Designer: Matt Hawkes

Art Director / Hair / Makeup: Heather Thomas 

Location: Attic Studios

atticstudios.net/ 

Special Thanks to:

Peter Clark, Matt Hawkes, Heather Thomas, and Attic Studios

Derek Nelson and B2Pro

RED Digital Cinema

Music is "The Pink Room" from Twin Peaks: Fire Walk With Me Soundtrack

NOTES FROM SHOOTER:

There is some flicker on the makeup table shots. I've never seen incandescents flicker @ 48 fps before but I suppose there's a first time for everything. The camera was in "Stills" mode meaning I was operating with the Interface Grip and pulling focus with the weight of the lens in my hand. Because of the small size of the camera, this isn't a good way to shoot motion with it. It really needs to go on your shoulder somehow. Once more of these are out in the field, I'll be curious to see the various configurations and what people come up with to get it in a more comfortable operating mode. 

NOTES FROM COLORIST:

So the workflow was developing the EPIC r3ds, shot 5:1 at a lower ISO to try to gain back more in the low end, and in redlogfilm for the maximum the image has to offer. The imagery we had was very high contrast, probably almost a 6-7 stop difference from the lowest values to just the midtones. A lot of blacks were clipped, but that's fine considering it was an available light shoot, and that's the mood of the image we wanted anyway. I found that the image looks almost identical to the RED One MX to the eye, but with noticeably increased clarity and texture. We were working with 16-bit DPX for uncompromised quality in the online, since Resolve doesn't take in EPIC r3ds yet. I found the highlight roll-off more gradual and more aesthetically pleasing than the R1's. It felt gentler overall and had a wonderful organic sharpness to it, which is what RED is known for: hyper sharp, but not in the edge-enhanced kind of way, brought to you by just the pure resolution of the sensor.

The blacks definitely had more information in them, and the dynamic range in the low end is very nice. I was digging out certain areas of hair while leaving the rest of the room dark, using several nodes of contrast expansion and compression. The extra bit rate in the codec really helped a lot, as I wasn't afraid to mess with the blacks as much. Yes, there was some noise to a certain extent in some of the more extremely underexposed shots, since it was shot run and gun. But that level of noise on such an unplanned shoot is pretty remarkable for a sensor packing so many pixels. The noise is also very tight and small, and black and white, which is aesthetically closer to film grain. A little better than what the R1 MX looks like to my eye.

Overall, I think it's the fact that the r3d has much less compression on it now that's making this such a great time in the grade. The sensor itself, for all intents and purposes, is just like an R1 MX, just tweaked to be better across the board. I think r3d is coming to full fruition now, and with the latest color science, it's looking more organic and natural than ever.