NAB 2014 Post-Mortem


May 10, 2014

A NAB blog post one month after the show? Better late than never, but this is pretty bad. So what's left to say? Well, in my opinion this year's show was, in a word, underwhelming. Among the countless new wares on display, only a handful would stop you in your tracks with the freshness of their concept or utility. If my saying this comes off as waning enthusiasm, that might be true, and I've been thinking a lot about why that is.

Not to point out the obvious but over the last 5 years monumental things have happened for the filmmaker. Within a very short span what was prohibitively expensive and difficult to achieve for 100 years of movie making became affordable and thus accessible to a whole new generation of artists. For the first time ever, anyone with a couple of grand could produce cinematic images and find an audience.

This was a two-fold revelation –

Manufacturing, imaging, and processing breakthroughs, along with mobile technology, facilitated high-quality, low-cost acquisition and postproduction; new social media avenues then supplied a waiting pool of resources and viewers.

In 1979 on the set of Apocalypse Now, Francis Ford Coppola said, 

“To me, the great hope is that now these little 8mm video recorders and stuff have come out, and some... just people who normally wouldn't make movies are going to be making them. And you know, suddenly, one day some little fat girl in Ohio is going to be the new Mozart, you know, and make a beautiful film with her little father's camera recorder. And for once, the so-called professionalism about movies will be destroyed, forever. And it will really become an art form. That's my opinion.” 

This statement no doubt sounded ludicrous in 1979 but the sentiment of technology empowering art is a beautiful one.

Turns out he was right, and it did happen in a big way. Predictably, these developments not only empowered artists but ignited an industry-wide paradigm shift. Over the course of the last decade, media has been on a course of democratization, and it's been a very exciting and optimistic time to be in the business. But here we are in 2014; the dust has settled and the buzz has worn off a bit. It's back to business as usual, but in our new paradigm, one defined by a media experience that's now digital from end to end and completely mobile. One where almost everyone is carrying around a camera in their pocket and being a "cameraperson" is a far more common occupation than ever before.

Because so much has happened in such a short time, it's now a lot harder for new technology to seize the public's imagination the way the first mass-produced, Raw-recording digital cinema camera did. In the same vein, a full-frame DSLR that shoots 24p video was a big deal. A sub-$100k digital video camera with dynamic range rivaling film was a big deal. Giving away powerful color correction and finishing software for free was a big deal. I'm always looking for the next thing, the next catalyst, and with a few exceptions, I didn't see much in this year's NAB offerings. I predict more of the same in the immediate future – higher resolution, wider dynamic range, and ever smaller and cheaper cameras. This is no doubt wonderful for filmmakers and advances the state of the art, but in my opinion it's unlikely to be as impactful on the industry as my previous examples.

That said, this is not an exhaustive NAB recap. Instead I just want to touch on a few exhibits that really grabbed me – new technology that will either:

A. Change the way camera / media professionals do their job.

B. Show evidence of a new trend in the business or a significant evolution of a current one.

Or both.

Dolby Vision

Dolby's extension of their brand equity into digital imaging is a very smart move. We've been hearing a lot about it, but what exactly is it? In 2007, Dolby Laboratories, Inc. bought the Canadian company BrightSide Technologies, integrated its processes, and renamed the result Dolby Vision.

"True-to-Life Video

Offering dramatically expanded brightness, contrast, and color gamut, Dolby® Vision delivers the most true-to-life viewing experience ever seen on a display. Only Dolby Vision can reveal onscreen the rich detail and vivid colors that we see in nature."

It is a High Dynamic Range (HDR) image achieved through ultra-bright, RGB-LED-backlit LCD panels. Images for Dolby Vision require a different finishing process and a higher-bandwidth television signal, as it uses 12 bits per color component instead of the standard 8. This allows for an ultra-wide-gamut image at a contrast ratio greater than 100,000:1.
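The jump from 8 to 12 bits matters more than it might sound; a quick sketch of the tonal resolution per color component at each depth (the comparison itself is mine, the bit depths are from the spec discussion above):

```python
def code_values(bits):
    """Number of distinct levels per color component at a given bit depth."""
    return 2 ** bits

for bits, label in [(8, "standard 8-bit"), (10, "10-bit"), (12, "Dolby Vision 12-bit")]:
    print(f"{label}: {code_values(bits)} levels per component")
# 8-bit gives 256 levels; 12-bit gives 4096 - 16x finer gradation,
# which is what keeps a 100,000:1 image from visibly banding.
```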

Display brightness is measured in candelas per square meter (cd/m²), or "nits" in engineering parlance. Coming from a technician's point of view, where I'm used to working at Studio Level – meaning my displays measure 100 nits – hearing that Dolby Vision operates at 2,000-4,000 nits sounded completely insane to me.

For context, a range of average luminance levels –

Professional video monitor calibrated to Studio Level: 100 nits
Phone / mobile device, laptop screen: 200-300 nits
Typical movie theater screen: 40-60 nits
Home plasma TV: <200 nits
Home LCD TV: 200-400 nits
Home OLED TV: 100-300 nits
Current maximum Dolby Vision test: 20,000 nits 
Center of 100 watt light bulb: 18,000 nits
Center of the unobstructed noontime sun: 1.6 billion nits
Starlight: <.0001 nit

After seeing the 2000 nit demo unit at Dolby's booth, I now understand that display brightness at these high levels is the key to creating a whole new level of richness and contrast. It's a genuinely new visual experience, and "normal" images at 100 nits look muddy in comparison.

These demonstrations are just a taste of where this is going though. According to Dolby's research, most viewers want images that are 200 times brighter than today’s televisions. If this is the direction display technology is going then it is one that's ideal for taking advantage of the wide dynamic range of today's digital cinema cameras.

Because it challenges an existing paradigm, and even though there are serious hurdles, Dolby Vision is rich with potential, and it was for me the most interesting thing at this year's NAB show. It really got me thinking about the ramifications for the cinematographer, and for camera and video technicians working on the set with displays this bright. It would require a whole new way of thinking about and evaluating relative brightness, contrast, and exposure. Not to mention that a 4000 nit monitor on the set could theoretically light the scene! This is a technology I will continue to watch with great interest.

Andra Motion Focus

My friends at Nofilmschool did a great piece on this >>>

Matt Allard of News Shooter wrote this excellent Q & A on the Andra >>>

Andra is interesting because it's essentially an alternative application of magnetic motion capture technology. Small sensors are worn under the actor's clothing, some variables are programmed into the system, and the Andra does the rest. The demonstration at their booth seemed to work quite well, and it's a clever application of existing, established technology. It does have the potential to change the way lenses are focused in production, but I have a few concerns that could prevent it from being 100% functional on the set.

1. Size. It's pretty big for now. As the technology matures, it will no doubt get smaller.

Image from Jon Fauer's Film and Digital Times >>>

2. Control. Andra makes a FIZ handset for it called the ARC that looks a bit like Preston's version. It can also be controlled by an iPad, but that seems impractical for most of the 1st AC's I know. In order for Andra to work, shifting between the system's automatic control and manual control with the handset would have to be completely seamless. If Auto Andra wasn't getting it, you would need to already be in the right place on the handset so that you could manually correct. Without a perfectly smooth transition between auto and manual, I don't see this system replacing current focus pulling methodology.

3. Setup time. Andra works by creating a 3D map of the space around the camera, and this is done by setting sensors – a 30x30 space apparently requires about 6 of them. Actors are also required to wear sensors. Knowing very well the speed at which things happen on the set and how difficult it can be for the AC's to get their marks, Andra's setup would need to be very fast and easy. If it takes too long, it will quickly become an issue, and then it's back to the old fashioned way – marks, an excellent sense of distance, and years of hard-earned experience.

Arri UWZ 9.5-18mm Zoom Lens

We associate lenses this wide with undesirable characteristics such as barrel distortion, architectural bowing, and chromatic aberration in the corners and at the frame edges. Because Arri's new UWZ exhibits none of these, it offers a completely fresh perspective for wide angle images.

DaVinci Resolve 11

Now a fully functional Non-Linear Editor!

One potential scenario: imagine a world where all digital media could be reviewed, edited, fixed and enhanced, and then output for any deliverable in a single piece of software. Imagine if said software were free and users at all levels and disciplines of production and post-production were using it. Just how much faster, easier, and cheaper would that make everything, across the board, from acquisition to delivery? Forget Blackmagic Design's cameras – Resolve is their flagship and what will guarantee their relevancy. It is the conduit through which future filmmakers will tell their stories.

Being a Digital Imaging Technician, I can't help but wonder what will happen to on-set transcoding when, perhaps in the near future, editors themselves are working in Resolve and able to apply Lookup Tables and color correction to the native, high-resolution media they're working with.

Sony

Sony always has one of the largest booths and the most impressive volume of quality new wares at NAB. For an international corporation with significant resources spread over multiple industries, I think they've done a surprisingly good job of investing in the right R&D, and they have pushed the state of the art of digital imaging forward. A serious criticism, however: they do a very poor job of timing the updates to their product lines. Because of this, many of us Sony users have lost a lot of money and found ourselves holding expensive product with greatly reduced value as little as a year after purchase. Other than that, Sony continues to make great stuff, and I personally have found their customer service to be quite good over the years. I always enjoy catching up at the show with my Sony friends from their various outposts around the world.

Sony F55 Digital Camera

The one thing that Sony has really gotten right is the F55. Through tireless upgrades, it has become the Swiss Army knife of digital cinema cameras. One quick counterpoint: after seeing F55 footage against F65 footage at Sony's 4k projection, I have to say I prefer the F65's image a lot. It is smoother and more gentle, and the mechanical shutter renders movement in a much more traditionally cinematic way. It's sad to see that camera so maligned now that the emphasis is very much on the F55. Sony is constantly improving the F55, with major features coming such as ProRes and DNxHD codecs, extended dynamic range with SLog3, 4k slow motion photography, and more. Future modular hardware accessories will allow the camera to be adapted for use in a variety of production environments.

Like the Shoulder-mount ENG Dock.

This looks like it would be very comfortable to operate for those of us who came up with F900's on our shoulders.

While this wasn't a new announcement, another modular F55 accessory on display at the show was this Fiber Adapter for 4k Live Production which can carry a 3840x2160 UHDTV signal up to 2000 meters over SMPTE Fiber. If the future of digital motion picture cameras is modular, then I think Sony has embraced it entirely with the F55. 

While F55 Firmware Version 4 doesn't offer as much as V3 did, 4k monitoring over HDMI 2.0 is a welcome addition, as it's really the only practical solution at present. 4x 3G-SDI links pose serious problems; Sony is aware of this and has invested substantially in R&D for a 4k-over-IP, 10 gig ethernet solution.
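A back-of-the-envelope bandwidth calculation shows why 10 gig ethernet is attractive here. This sketch assumes 10-bit 4:2:2 at 30p (my own illustrative choice) and counts only the video payload, ignoring blanking, audio, and IP overhead:

```python
def video_payload_gbps(width, height, fps, bits_per_sample, samples_per_pixel=2):
    """Approximate uncompressed video payload in Gbit/s.
    samples_per_pixel=2 models 4:2:2 sampling (one luma plus one
    alternating chroma sample per pixel); use 3 for 4:4:4 RGB."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

rate = video_payload_gbps(3840, 2160, 30, 10)
print(f"UHD 30p, 10-bit 4:2:2: ~{rate:.2f} Gbit/s")  # comfortably inside a 10 GbE link
```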

While it's difficult to discern what you're actually looking at in the below image, the 4k SDI to IP conversion equipment was on display at the show. 

If this technology could become small and portable enough that a Quad SDI to IP converter could live on the camera, your cable run could be a single length of cheap Cat6 ethernet cable to the engineering station, where it would get converted back to an SDI interface. This would solve the current on-set 4k monitoring conundrum. In the meantime, there really aren't a ton of options; Sony currently has only two 30" 4k monitors with a 4x 3G-SDI interface that could conceivably be used on the set.

The PVM-X300 LCD, which was announced last year and has already come down in price about 50%.

And the first 4k OLED, the Sony BVM-X300. While it's difficult to perceive 4k resolution on a display this size, the image is gorgeous, and it will no doubt be the Cadillac of 4k professional monitors once it's out. Sony was typically mum about the specifics, so release date and price are currently unknown.

Sony BVM-X300 4k OLED Professional Monitor. I apologize for the terrible picture.

Sony A7s Digital Stills and 4k Video Camera

I'll briefly touch on the Sony A7s, as I'm an A7r owner and have absolutely fallen in love with the camera. For those interested in how these cameras stack up: the Sony Alpha A7, A7r, and A7s are all full frame, mirrorless, E-mount, and have identical bodies.

The A7 is 24.3 MP, 6000x4000 stills, ISO 100-25,600, body only is $1698.

The A7r is the A7 minus the optical low pass filter and higher resolution 36.4 MP, 7360x4912 stills, ISO 100-25,600, body only is $2298.

The A7s is 12.2 MP, 4240x2832 stills, ISO 50-409,600, body only price is $2498. 

If anything, I think the A7s is indicative of a rising trend – small, relatively inexpensive cameras that shoot high resolution stills and video. I'm guessing that most future cameras above a certain price point will be "4k-capable". That doesn't mean I would choose to shoot motion pictures on a camera like this. When cameras this small are transformed into production mode, they require too many unwieldy and cumbersome accessories, and the shooter and/or camera department just ends up fighting with the equipment. I want to work with gear that facilitates doing your best work, and in my experience with production, that is not repurposed photography equipment.

Interestingly, despite this, the A7s seems to be much more a 4k video camera than a stills camera. At the sensor level, every pixel in its 4k array is read out without pixel binning, which allows it to output 8-bit 4:2:2 YCbCr uncompressed 3840x2160 video over HDMI in different gammas, including SLog2. The low pixel count also allows for greatly improved sensitivity, with an ISO range from 50 to 409,600. The camera has quite a lot of other video-necessary features such as timecode, picture profiles, and, with additional hardware, balanced XLR inputs. The A7s' internal video recording is HD only, which means 4k recording must be done with some sort of off-board HDMI recorder.
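The sensitivity story is easiest to see in the photosite size. A rough sketch, assuming a nominal 35.8mm full-frame sensor width (exact active areas vary slightly per model):

```python
def pixel_pitch_um(sensor_width_mm, horizontal_pixels):
    """Approximate photosite pitch in microns (ignores inter-pixel gaps)."""
    return sensor_width_mm / horizontal_pixels * 1000.0

print(f"A7s (4240 px wide): {pixel_pitch_um(35.8, 4240):.1f} um pitch")
print(f"A7r (7360 px wide): {pixel_pitch_um(35.8, 7360):.1f} um pitch")
# Each A7s photosite covers roughly 3x the area of an A7r photosite,
# which goes a long way toward explaining that 409,600 maximum ISO.
```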

As is evident from many wares at this year's show, if you can produce a small on-camera monitor, then it might as well record a variety of video signals as well.

Enter the Atomos Shogun. Purpose built for cameras like the Sony A7s and at $1995, a very impressive feature set. 

Hey what camera is that?

Shooting my movie with this setup doesn't sound fun but the Shogun with the A7s will definitely be a great option for filmmakers on micro budgets. 

One cool and unexpected feature of shooting video on these Sony cameras with Sony E-mount lenses (there aren't many choices just yet) is that autofocus works surprisingly well. I've been playing around with this on the A7r with the Vario-Tessar 24-70mm zoom, shooting 1080 video, and the lens focuses itself quite capably in this mode, which is great for docu and DIY stuff. I have to say I'm not terribly impressed with this lens in general, though.

Sony Vario-Tessar T* FE 24-70mm f/4 ZA OSS Lens

It's quite small and light, and the autofocus is good, but f/4 sucks. The bokeh is sharp and jagged instead of smooth and creamy, and it doesn't render the space of the scene as nicely as the Canon L Series zooms, which is too bad. Images from this lens seem more spatially compressed than they should be.

At Sony's booth I played around with their upcoming FE 70-200mm f/4.0 G OSS lens on an A7s connected in 4k to a PVM-X300 via HDMI. I was even less impressed with this lens, and there was quite a bit of CMOS wobble and skew coming out of the A7s. It wasn't the worst I've seen, but definitely something to be aware of. This really should come as no surprise for a camera in this class, though, and even Sony's test footage seems to mindfully conceal it.

Pomfort's LiveGrade Pro v2

As a DIT, I'd be remiss if I didn't mention LiveGrade Pro. 

LiveGrade Pro is a powerful color management solution, now with GUI options, a Color Temperature slider that affects RGB gains equally, stills grading, ACES, and support for multiple LUT display hardware. Future features include a Tint slider for the green-magenta axis, nestled between Color Temp and Saturation. Right, Patrick Renner? :)

Conclusion 

So what's the next big epiphany? Is it this?

What is Jaunt? Cinematic Virtual Reality.

Jaunt and Oculus Rift were apparently at NAB this year with a demo on the floor. This writer, however, was unable to find it. My time was unfortunately very limited, but other than Jaunt and the invite-only Dolby Vision demo, I feel like I saw what I needed to see. What will be uncovered at next year's show? More of the same? Or a few things that are radically new and fresh?

Sony OLED Calibration part 2


January 4, 2014

After updating Part 1 of this article, I felt an independent follow-up was in order. I would say that monitor calibration is at the center of the ever-evolving craft that has come to be known as "DIT" – the relatively new space we occupy, somewhere amorphously between production and post but never completely one or the other. This short article is intended to expand on and clarify some points of the previous one.

This bit of nomenclature alone I find interesting, as many production people I work with seem to be under the impression that my cart is the "DIT" and that I'm the "DIT Tech" or the "DIT Operator". I've even occasionally found this is how I'm listed on the call sheet. Most readers of this site, I would assume, are aware that the acronym refers to the person working the cart and not the cart itself. D.I.T. = Digital Imaging Technician – a human being, not an unwieldy pile of video and computer gack on wheels. However, these three loosely thrown-around letters have come to mean something completely different depending on who you're talking to.

So in DIT, or for DIT's, or however you prefer to frame it: being able to trust what you're seeing on your monitor is one of the most important components of our craft. You have to be confident that the digital images you're working with are being faithfully represented. This is accomplished through calibration.

As in the previous article, to calibrate the Sony Trimaster EL OLED monitors I'm using the X-Rite i1 Pro or Pro 2 spectrophotometer and Sony's Automatic White Balance software. There are of course many other spectrophotometer options available, but this is one that's affordable and that I've come to trust.

Sony Automatic White Balance Software


X-Rite i1 Spectrophotometers


WHITE BALANCE ADJUSTMENT:

100% White Test Video


The first thing I'd point out when calibrating the Sony PVM-series OLED's is that they are far trickier than their big brothers, the BVM's. The RGB Gain and Bias adjustments, as well as Brightness, Contrast, and Chroma, are much coarser than those found in the more expensive monitors. Because of this lack of subtlety, it's difficult if not impossible to perfectly hit our OLED calibration targets of x .307 and y .318 on the PVM's.

While I think the PVM's are excellent displays and priced appropriately, I have found small differences and inconsistencies from panel to panel. Let's be realistic, if these monitors were intended to be used as a true reference, the purpose of the higher end BVM's would be displaced and sales cannibalized. The PVM's when calibrated can come very close in terms of color and tonality but are not a true reference and I don't believe were ever intended to be.

While I think the quality control at Sony is good, in the case of the PVM, which is not the premium line, uniformity can't be expected to be as stringent. Even on monitors that can't hit x .307 and y .318 perfectly, you will always be able to get very close – for example x .308 y .318, or x .307 y .319. This will yield an "acceptable" result, and most users will be hard pressed to see much, if any, difference between a monitor reading these numbers and one coming up perfect. I realize this information may not please Sony, but after measuring scores of these monitors, this is the conclusion I've come to.

BIAS ADJUSTMENT:

20% Gray Test Video


The issue stated above is even more problematic with Bias calibration on the Sony PVM's. For whatever reason, it's difficult if not impossible to get a steady reading for Bias using a 20% neutral gray video signal – at least not using the X-Rite probes with the Sony White Balance software. The probe readings tend to jump around in the second and third decimal places, so it's difficult to tell if you're actually aligned. I've found that if you can get the numbers within a decimal point or two of the targets, the gray video will look very neutral. Between x .306-.308 and y .316-.319 has proven acceptable. And once again, I've encountered certain monitors that hit the targets pretty much spot on, whereas others will not. Why this specific instability? Looking for answers.
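One workaround I'd suggest for the jumpy Bias reads is simply averaging several consecutive probe measurements before deciding whether you're inside the acceptable band. A sketch (the readings below are made up for illustration):

```python
from statistics import mean

def stable_read(samples):
    """Average repeated (x, y) chromaticity reads to damp probe jitter."""
    return (mean(s[0] for s in samples), mean(s[1] for s in samples))

# Hypothetical jittery reads off a 20% gray patch
reads = [(0.3068, 0.3172), (0.3081, 0.3188), (0.3075, 0.3179), (0.3071, 0.3184)]
x, y = stable_read(reads)
print(f"averaged: x {x:.4f}  y {y:.4f}")
print("OK" if 0.306 <= x <= 0.308 and 0.316 <= y <= 0.319 else "re-adjust Bias")
```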

Neither of these issues is as apparent in the Sony BVM OLED monitors. Those OLED panels are of the highest quality and extremely stable. If nothing less than total accuracy is required, stick with the BVM's and a probe like the Klein K-10. If "close enough" works for you and your clients, the PVM's and X-Rite have proven sufficient – and by "close enough" I mean "entirely acceptable" for most of us working in the field. Beyond the calibration and panel quality differences between the two lines, be aware there are also video signal processing differences, most evident in the way motion is rendered. The BVM's employ a more sophisticated Interlace to Progressive (I/P) conversion with less delay and smoother motion.

BEST CALIBRATION PRACTICE FOR SONY OLED MONITORS:

1. Follow the instructions in this article to automatically white balance your Sony monitor >>>

2. Input 100% white test video into the monitor after the Automatic White Balance is complete. If it is not reading the targets of x .307, y .318, manually adjust the RGB gains until it is.

3. Input 20% gray test video into the monitor and do the same process for Bias if necessary. 
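The steps above reduce to a simple pass/fail check against the x .307, y .318 targets. This sketch uses a ±.0015 tolerance, which is my own illustrative choice sized to accept the "close enough" readings described earlier:

```python
TARGET_X, TARGET_Y = 0.307, 0.318

def gains_acceptable(x, y, tol=0.0015):
    """True if a measured white chromaticity is within tolerance of target."""
    return abs(x - TARGET_X) <= tol and abs(y - TARGET_Y) <= tol

print(gains_acceptable(0.307, 0.318))  # True  - spot on
print(gains_acceptable(0.308, 0.318))  # True  - the "close enough" PVM case
print(gains_acceptable(0.310, 0.321))  # False - keep trimming RGB gains
```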

RELATED ARTICLES:

HD Monitor Calibration - White Balance and Color Bars

Sony OLED Calibration part 1

Color Correction


May 19, 2013

This short article began with the rather dry title "Tracking CDL Through Post." As I began to write, my thoughts started to meander into the obvious and maybe not-so-obvious ancillary aspects of the topic, and the more gestalt title "Color Correction" now seems appropriate. Please forgive me if this post comes off as very "101".

I do a fair job of keeping up with the blogs (RIP Google Reader, enter Feedly). Among the many great sites out there, I'm always reading about software and hardware tools, plug-ins, look building, and the technique of color correction, but very little about why it's necessary in the first place.

So why do we color correct?

Forget building Looks for a moment – and by that I mean digitally crafting the visual qualities of a shot (color, saturation, and contrast) instead of doing it the old fashioned way, with optics, exposure, and photochemistry. At its very root, color correction is about camera matching and shot matching within a scene, so as not to take the viewer out of the moment with an abrupt change in visual continuity. And this, more than anything else, is defined by color temperature.

A definition - Correlated Color Temperature (wikipedia):

The particular color of a white light source can be simplified into a correlated color temperature (CCT). The higher the CCT, the bluer the light appears. Sunlight at 5600K for example appears much bluer than tungsten light at 3200K. Unlike a chromaticity diagram, the Kelvin scale reduces the light source's color into one dimension. Thus, light sources of the same CCT may appear green or magenta in comparison with one another [1]. Fluorescent lights for example are typically very green in comparison with other types of lighting. However, some fluorescents are designed to have a high faithfulness to an ideal light, as measured by its color rendering index (CRI). This dimension, along lines of constant CCT, is sometimes measured in terms of green–magenta balance;[1] this dimension is sometimes referred to as "tint" or "CC".

533px-PlanckianLocus.png
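For readers who want to connect the xy chromaticity numbers used later in this post to Kelvin figures, McCamy's well-known cubic approximation converts a chromaticity near the Planckian locus to a correlated color temperature:

```python
def mccamy_cct(x, y):
    """Approximate CCT in Kelvin from CIE 1931 xy chromaticity
    (McCamy's cubic fit; valid near the Planckian locus,
    roughly 2000K-12500K)."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(f"D65 white (x .3127, y .3290): ~{mccamy_cct(0.3127, 0.3290):.0f} K")
```

Note that this collapses the green-magenta ("tint") dimension entirely, which is exactly the limitation of CCT the definition above describes.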

Every camera sensor, lens, in-front-of-the-lens filter, light source (most particularly the sun and sky!), and light modifier will produce or interpret Reference White (chroma-free) in a slightly different way. In the case of lenses, a set like the Master Primes is remarkably well matched, whereas older glass like the Zeiss Super Speed Mk III's, for example, will show a lot of color temperature and even contrast shift from lens to lens. This being the case, there is a significant amount of color temperature offset to contend with between all of our light-producing and image-reproducing devices.

Here's a 50mm Ultra Prime on an Arri Alexa with camera white balance set at 3300 -3, lens iris at T2.8. Below it is an Angenieux Optimo 24-290mm zoom @ 50mm on the same camera with the same exposure and white balance.

RGAR311338.jpg
ultra-clean.jpg
optimo-clean.jpg
angeniuex_optimo_rental_seatles600x600.png

The Optimo zoom (bottom image) is much warmer and greener than the prime. If these lenses were both working in the same setup, color correction instantly becomes necessary, lest one angle look "correct" and the other too warm or too cool in comparison.

All of these variables - optics, light sources, sensors, etc - and their inherently different color temperatures often add up to cameras that don't match and shots within the same scene that are offset from one another along the warm-cool and green-magenta axis.

In this era of high-ISO cameras, color temperature offsets are most evident with heavy Neutral Density filters, often used to block as much as 8 stops. In my opinion, heavy ND's are the most destructive variable in contemporary digital imaging. Even with the best available filters, such as the Schneider Platinum IRND's, we're still seeing a lot of color temperature offset with filters over 1.2. The problem seems to be that most Neutral Density filters (conventional or with Infrared Cut) do not retard red, green, and blue wavelengths of light in equal proportion. What we're left with after cutting a lot of stop with ND is more blue and green wavelength than red, which is vital to the accurate reproduction of skin tones. If that part of the picture information has been greatly reduced, it can be very challenging to digitally push life and warmth back into the subject's skin without introducing a lot of noise and artifacting.
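The density numbers on these filters convert directly to stops: transmittance is 10 to the minus density, so each ND .3 is very nearly one stop. A quick sketch over the densities tested below:

```python
import math

def nd_stops(density):
    """Stops of light loss for an ND filter of the given optical density."""
    return density / math.log10(2)  # ND .3 ~= 1 stop

for d in (0.3, 1.2, 1.5, 1.8, 2.1):
    print(f"ND {d}: {nd_stops(d):.1f} stops, passes {10 ** -d:.2%} of the light")
```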

Here's the 50mm Ultra Prime again.

ultra-clean.jpg

And here with a Platinum IRND 1.2. The camera ISO, white balance, and exposure are the same. To get the stop needed to compensate for the ND, the light on the chart was increased by bringing the source closer rather than dimming or scrimming, so its color temperature wouldn't be affected.

nd12.jpg

Comparing the two, they're really quite close. I've got to say, the Schneider Platinums are the best I've found. With other sets of IRND's you'll see significant color temp offset even at ND .3, but with these at ND 1.2 there is only a very slight shift toward green. Still, it's something that will need to be corrected.

Here's IRND 1.5. We're starting to get increasingly cool and green.

nd15.jpg

IRND 1.8

nd18.jpg

IRND 2.1

nd21.jpg

And for comparison, back to our original, filter-less image.

ultra-clean.jpg

After depriving the image of 7 stops of light with Neutral Density, we've unintentionally reduced some of our red channel picture information. At this point we can correct with camera white balance, by swinging the camera to a warmer Kelvin and pulling out a little green. Or we can use digital color correction tools like LiveGrade at the time of photography, or DaVinci Resolve in post production, to match this shot with the scene. ND filters are but one variable among many when it comes to managing color temperature offsets across camera and lighting.

Fortunately, there are numerous ways to deal with it.

In my opinion, these offsets can usually be solved most expediently with camera White Balance (WB). Depending on the camera and how we're recording, the WB setting is either "baked in" to the image or exists as metadata. In the case of the Arri Alexa, the orange-cyan (warm-cool) axis is represented in degrees Kelvin, with green-magenta adjustable in "+" or "-" points of color correction.

alexa_WB.jpg

If you're working with the RED camera, the Redmote is great for wirelessly adjusting white balance when you need to.

redmote.png

Wireless remote operation of the Alexa is a desperately needed feature. The best we can do for now is the Arri RCU-4 better known as the "Assistant Panel".

rcu_4.jpg

This is a great little device that's chronically underutilized, as it gives you full remote access to the camera, unlike the very limited web browser ethernet interface. The RCU-4 is powered through its control cable, which I've used successfully at lengths up to 150'. It makes white balancing the Alexa incredibly fast and efficient, as it no longer need be done at the side of the camera.

Not to get too obvious with this... Moving on.

Another approach is to manage color temperature by putting color correction gel – CTB, CTO, CTS, Plus Green, Minus Green – on light sources, altering those with undesirable color temperatures to produce a correct, color-accurate response. Color correction tools, digital or practical, don't necessarily apply to the creative use of color temperature. Having mixed color temperatures in the scene is an artistic decision, and one that can have a very desirable effect, as it builds color contrast and separation into the image. Mixed color temperatures will result in an ambient color temperature lying somewhere between the coolest and warmest source. Typically in these scenarios, a "Reference White", or chroma-free white, can be found by setting the camera white balance somewhere around this ambient color temperature.

Identifying problematic light sources and gelling them correctly can be a very time- and labor-intensive process, and one that doesn't happen on the set as often as it should, so it's usually left to the digital toolset. There is now a whole host of affordable software that can be used on the set at the time of photography, like LiveGrade or LinkColor, or later in post production, such as Resolve, Scratch, Express Dailies, and countless others.

When we're talking about on-set color correction, we're usually talking about ASC-CDL, the American Society of Cinematographers' Color Decision List. CDL is a very useful way to "Pre-Grade", or begin color correction, at the time of photography. This non-destructive color correction data is easily tracked through post production and can be linked to its corresponding camera media through metadata with an Avid ALE. When implemented successfully, the Pre-Grade can be recalled at the time of finishing and used as a starting point for final color. In practice, this saves an enormous amount of time, energy, and consequently... $$$.
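Under the hood, a CDL is just ten numbers per shot: per-channel Slope, Offset, and Power, plus a single Saturation. Per the ASC-CDL spec, each channel gets out = clamp(in × slope + offset)^power, followed by a saturation adjustment weighted by Rec.709 luma. A minimal Python sketch of that math:

```python
def asc_cdl(rgb, slope, offset, power, sat):
    """Apply an ASC-CDL Slope/Offset/Power + Saturation to one RGB triple.
    Values are normalized 0.0-1.0. SOP is applied per channel, then
    saturation using Rec.709 luma weights, as the CDL spec describes."""
    out = []
    for c, s, o, p in zip(rgb, slope, offset, power):
        v = c * s + o
        v = max(v, 0.0)        # clamp negatives before the power function
        out.append(v ** p)
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return tuple(luma + sat * (v - luma) for v in out)

# An identity CDL returns the input unchanged (up to float rounding):
asc_cdl((0.5, 0.25, 0.75), (1, 1, 1), (0, 0, 0), (1, 1, 1), 1.0)
```

That's the entire transform, which is exactly why CDL travels so well between tools compared to a full grade.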

Here's one way an ALE with the correct CDL information can be generated in Assimilate Scratch Lab:

In the top level of Scratch, here's our old friend the Chip Chart. Hooray!

scratch_top.jpg

We've applied the standard Alexa Log-to-Video 3D LUT to these shots and, as you can see, the first one looks pretty good but the rest suffer from varying degrees of color temperature offset.

s1.jpg
s2.jpg
s3.jpg
s4.jpg
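That Log-to-Video conversion is just a 3D LUT: each pixel's RGB indexes into a lattice of correction values, and the output is trilinearly interpolated between the eight surrounding lattice points. Here's a minimal sketch of that lookup, assuming a nested-list LUT for illustration (real LUTs are loaded from .cube or similar files):

```python
def sample_3d_lut(lut, size, r, g, b):
    """Trilinearly interpolate a 3D LUT (lut[ri][gi][bi] -> (R, G, B) tuple),
    with inputs normalized 0-1. This is the per-pixel operation a
    Log-to-Video LUT performs."""
    def axis(v):
        x = min(max(v, 0.0), 1.0) * (size - 1)
        i = min(int(x), size - 2)   # lattice index and fractional position
        return i, x - i
    (ri, rf), (gi, gf), (bi, bf) = axis(r), axis(g), axis(b)

    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    # Interpolate along blue, then green, then red between the 8 corners
    c00 = lerp(lut[ri][gi][bi],     lut[ri][gi][bi+1],     bf)
    c01 = lerp(lut[ri][gi+1][bi],   lut[ri][gi+1][bi+1],   bf)
    c10 = lerp(lut[ri+1][gi][bi],   lut[ri+1][gi][bi+1],   bf)
    c11 = lerp(lut[ri+1][gi+1][bi], lut[ri+1][gi+1][bi+1], bf)
    return lerp(lerp(c00, c01, gf), lerp(c10, c11, gf), rf)
```

An identity LUT (lattice values equal to their own coordinates) returns the input unchanged, which is a handy sanity check when building a pipeline.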

At this point, if we Pre-graded on the set, we could load the correct CDL for each shot and be ready to output dailies.

In the lower left of the Matrix page is the LOAD button. Click it to open this dialog window:

load_cdl.jpg

Here, CDL from the set can be applied on a shot-by-shot basis. Once everything is matching nicely, it's time to embed this work into metadata that can easily be tracked and recalled later.

CDL_Export1.jpg

Select +CDL and click "Export EDL/ALE".

cdl_ale.jpg

From the drop-down, select .ale, and then name your ALE something appropriate.

Now in Avid Media Composer, we're going to import this ALE to add ASC-CDL Slope, Offset, Power, and Sat (roughly Gain, Lift, Gamma, and Saturation) values that will now be associated with their corresponding clips.

This post assumes a working knowledge of Media Composer. If you're not sure how to set up an Avid project, import media, make bins, and import an ALE, there are plenty of great tutorials out there.

Once you have the transcoded DNxHD media in the correct MediaFiles directory, import the ALE.

choose_columns.jpg

Click the "Hamburger" icon in the lower left of the bin (Avid officially calls this the Fast Menu, though I've heard many an Assistant Editor refer to it as the Hamburger), and then select "Choose Columns".

bin_columns.jpg

Here we can select which columns show up in our bin. The ASC-CDL values are already embedded in the ALE we imported, but it's a good idea to verify them, which we can do at the bin level by turning on these columns. From the "Choose Columns" dialog, select ASC_SOP (Slope, Offset, Power) and ASC_SAT (Saturation).

asc_sop_sat.jpg

As you can see, all of the adjustments we made as CDL are now reflected as numeric values linked to their corresponding shots in the form of Avid metadata. ASC-CDL, while unfortunately limited in a lot of ways, really is a fairly universal interchange for color correction data and can be implemented quite easily.

What we really need is a way to recall these ASC-CDL values from the ALE in software like LiveGrade, making this color correction data even more interchangeable.
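In the meantime, pulling the values back out of an ALE yourself is straightforward, since an ALE is just a tab-delimited text table. Here's a sketch: it assumes a well-formed ALE with Name, ASC_SOP, and ASC_SAT columns and the usual '(sR sG sB)(oR oG oB)(pR pG pB)' SOP string, and `parse_ale_cdl` is my own hypothetical helper, not part of any shipping tool:

```python
import re

def parse_ale_cdl(ale_text):
    """Extract per-clip ASC-CDL values from an ALE's tab-delimited table.
    Assumes 'Column' and 'Data' section markers and ASC_SOP strings
    formatted as '(sR sG sB)(oR oG oB)(pR pG pB)'."""
    lines = ale_text.splitlines()
    cols = lines[lines.index("Column") + 1].split("\t")
    cdl = {}
    for row in lines[lines.index("Data") + 1:]:
        if not row.strip():
            continue
        rec = dict(zip(cols, row.split("\t")))
        groups = re.findall(r"\(([^)]*)\)", rec["ASC_SOP"])
        slope, offset, power = ([float(v) for v in g.split()] for g in groups)
        cdl[rec["Name"]] = {"slope": slope, "offset": offset,
                            "power": power, "sat": float(rec["ASC_SAT"])}
    return cdl
```

The result is a dictionary keyed by clip name, which could then be matched back to shots in whatever grading tool you're handing off to.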

Another possible workflow is to generate the dailies in Resolve using CDL from the set. Once that CDL corresponds with a shot in Resolve, it can track with that shot all the way to finishing if the original Resolve project is used.

What's the best approach? All of the above. The right tool for the right task, and no two projects are alike. That's why a DIT is hired in the first place: to consider the criteria and then advise the best course of action.

Update

on 2013-06-06 14:55 by Ben Cain

Just read this related article -

http://www.hdvideopro.com/technique/miscellaneous-technique/help-desk-getting-it-white-the-first-time.html?utm_medium=referral&utm_source=pulsenews&start=1

Content feels eerily familiar!