Advanced OLED Monitor Calibration

May 30, 2014

On negativespaces.com you'll find numerous articles on the subject of monitor calibration. I'd say that of all the topics I've covered since 2007, this is the one I've emphasized the most. Accurate monitoring is of paramount importance to the work we do as cinematographers, photographers, and technicians. Without a monitor we can trust, we don't have much as imaging professionals. Among the various display technologies currently available, OLEDs have emerged as the gold standard for accurate monitoring.

I’d now like to dig a little deeper than I have in the past and present advanced information on OLED image processing and 3D LUT (Lookup Table) based calibration options for them. This article was made possible through collaboration with Bram Desmet from Flanders Scientific, Inc. (FSI) as well as input from Sony Broadcast OLED product manager, Gary Mandle. It's heavy on the subject of 3D LUT based calibration as this offers more control for the end user. Because FSI’s CM250 and CM172 professional OLED monitors allow for direct 3D LUT implementation, this article is somewhat specific to them. 

Disclaimer - I've owned and operated a good variety of products from both Sony and FSI. I'm in neither one camp nor the other, and both companies have excellent offerings for the professional video community.

My previous writing on this topic detailed the calibration process for Sony OLED monitors using White Balance Adjustment. To recap, this process uses Sony's Auto White Balance Adjustment software and one of several supported probes to either manually or automatically adjust the monitor’s RGB gains and bias to arrive at a chroma-free white and dark gray point. 

Correct monitor white balance should ensure the reproduction of a neutral gray scale and accurate colors. If it doesn't, then there are calibration issues that cannot be resolved through white balance adjustment alone. In this case, a custom calibration 3D LUT could offer a solution. The advantage of 3D LUT based calibration is that it gives the user total control; not just of white balance, but of gamma and color gamut as well. This allows for greater user customization and can remedy display issues beyond what can be done through simpler means.

Lookup Tables aren’t used solely for calibration and are actually an integral component of image processing in all professional monitors. An order of operations is employed between signal input and the final display. Somewhere in the signal chain a 3D LUT is used to transform the monitor's native, wide gamut to an output specific color space such as Rec. 709. All manufacturers do this a bit differently so there is no universal image processing sequence.

For example -

Sony's Trimaster EL OLED monitors use a 1D LUT for each red, green, and blue color channel at the end of the processing chain to ensure uniformity from panel to panel.

FSI's OLED monitors accomplish screen uniformity in a different way. At the beginning of the chain there's a panel-specific 1D LUT for white balance and gamma, followed by a panel-specific 3D LUT for color gamut. These user-accessible LUTs combine adjustments that ensure panel uniformity with transforms for specific color gamuts.

Sony OLEDs utilize 3D LUTs as well, but users can't access them beyond basic white balance adjustments. For example, when Rec. 709 color space is selected on a Sony OLED monitor, a 3D LUT is being used to transform the native gamut to Rec. 709, but only white balance can be adjusted within this selection. The user has no real ability to adjust other aspects of the calibration. The only option for further control is to put the Sony monitor into wide color gamut and then create a custom calibration 3D LUT to be implemented with external hardware. Despite this, Sony OLEDs, the BVM series in particular, have a strong track record of stable and accurate calibration for different color spaces achieved through white balance adjustment.

FSI OLED monitors can be calibrated with white balance adjustment but also offer many other options, including user access to the various LUTs used in the signal chain as well as directly loadable custom calibration 3D LUTs.

For simplicity's sake, this article uses Rec. 709 for the example as it's the most widely used and well understood color space. DCI-P3 would not be as universal an example because it uses a different white point of x 0.314, y 0.351. Thus a monitor calibrated to DCI-P3 would not be useful for monitoring HDTV images and vice versa.

CIE 1931 chromaticity diagram showing the Rec. 709 gamut

So what do we know about monitor calibration?

We know that a monitor is deemed calibrated when it's reproducing a neutral, chroma-free grayscale and accurate colors. This is done by ensuring the monitor's white point and RGB primaries are the same as specified within the color gamut (aka color space) in which we're working. A gamut is represented on the CIE 1931 chromaticity diagram (above graphic), a two-dimensional chart that maps visible colors to x, y coordinates. As in the example above, the limits of Rec. 709's gamut are defined by the triangle; any color outside it cannot be reproduced. The three corners of the triangle define the RGB primaries, that is, where 100% red, 100% green, and 100% blue reside. The red primary is at x .64 y .33, the green primary at x .30 y .60, and the blue primary at x .15 y .06. The gamut also defines the 100% white point at the coordinates x 0.3127, y 0.3290 (shorthand x .313, y .329). If our monitor is measuring a 100% white test signal at these coordinates, then it is at least properly white balanced.
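As a minimal sketch of that verification (assuming a probe that reports CIE XYZ; the function names and the 0.003 tolerance are my own illustrative choices, not a standard), converting a reading to xy and comparing it against the Rec. 709 targets might look like this:

```python
# Sketch only: convert a probe's XYZ reading to CIE xy and compare to targets.
REC709_TARGETS = {
    "white (D65)": (0.3127, 0.3290),
    "red":   (0.640, 0.330),
    "green": (0.300, 0.600),
    "blue":  (0.150, 0.060),
}

def xyz_to_xy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

def check_point(name, xyz, tol=0.003):   # tol is a hypothetical tolerance
    x, y = xyz_to_xy(*xyz)
    tx, ty = REC709_TARGETS[name]
    status = "OK" if abs(x - tx) <= tol and abs(y - ty) <= tol else "needs adjustment"
    print(f"{name}: measured x={x:.4f} y={y:.4f}, target x={tx} y={ty} -> {status}")

# example reading from a 100% white patch at roughly 100 nits
check_point("white (D65)", (95.2, 100.0, 108.1))
```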

But what if it isn't? What if we white balance a monitor and it's not quite hitting the targets? Or maybe the red, green, or blue primaries aren't landing where they should? The only real solution is to create a custom calibration 3D LUT that specifically addresses these issues. A white balance adjustment is a great place to start but in practice may not be able to address specific problems you may discover.

It's very important to note that despite Rec. 709's white point coordinates of x .313 y .329, it's now recommended that for OLEDs, Judd Modified Color Matching Function (aka Judd-Vos CMF) targets of x .307, y .318 be used instead. This CMF is adjusted from the CIE 1931 Standard Observer and is intended to match an OLED monitor to our standard reference, a D65 targeted CRT monitor. This is a complicated topic largely having to do with the phenomenon of metamerism failure: displays built on different technologies can look different even when they measure exactly the same. This reflects a shortcoming in the CIE 1931 color science model's ability to predict accurate metamers between all light sources for all viewers. It is not a specific fault of any probe or display, just a limitation of the color science when dealing with devices that have different spectral distributions.

Because of this, the Judd-Vos targets are now widely viewed as the "correct" ones for OLED displays, despite there being no specification, or even a recommendation, adopted by an industry body. For the purposes of this article, the Judd-Vos targets of x .307 y .318 will be used.

Here's where we get Flanders specific, as these monitors are able to directly load user-created calibration LUTs.

FSI CM250 OLED Monitor

Order of Operations: 

There's a lot that happens between plugging a video cable into the monitor and what you ultimately see on it. Every manufacturer handles image processing differently, but in the case of the FSI OLED monitors, this is the signal chain.

1. Signal Input >

2. User-loaded "DIT LUT", a 3D LUT using FSI's Color Fidelity Engine (CFE2) (optional and intended to be a "Look" LUT) >

3. A 1D LUT for gamma and white balance that includes panel-specific calibration. Can be turned on or off. A custom white balance adjustment affects this 1D LUT. >

4. A calibration 3D LUT for color gamut. It takes the panel's native, wide gamut to a target color space and includes panel-specific calibration. Can be turned on or off. Preloaded for Rec. 709, EBU, SMPTE C, DCI-P3, USER 1, USER 2, and USER 3. When the monitor is in Wide Gamut mode, this LUT position is bypassed. Any of the USER positions can hold a custom calibration 3D LUT from CalMAN or LightSpace software, implemented using FSI's Color Fidelity Engine (CFE2). >

5. Final Display.

Because any of these LUT positions can be turned on or off, a custom calibration can go in one of several positions. The order of operations is thus highly customizable, which allows you to do technical transforms and calibrations with a higher degree of precision than simply having to concatenate everything into a single LUT. This flexibility also allows the user to implement "Look" LUTs without having to profile the monitor.
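To make that order of operations concrete, here's a rough sketch in Python of how bypassable 1D and 3D LUT stages compose between input and panel. The function names and identity LUTs are mine for illustration; this is not FSI's implementation, and the 3D lookup shown is plain trilinear interpolation.

```python
import numpy as np

def apply_1d_lut(rgb, lut1d):
    """Per-channel curve; lut1d has shape (N, 3), sampled evenly on [0, 1]."""
    xs = np.linspace(0.0, 1.0, lut1d.shape[0])
    out = np.empty_like(rgb)
    for c in range(3):
        out[..., c] = np.interp(rgb[..., c], xs, lut1d[:, c])
    return out

def apply_3d_lut(rgb, lut3d):
    """Trilinear lookup into an (N, N, N, 3) cube indexed [r, g, b]."""
    n = lut3d.shape[0]
    p = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(p).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    fr, fg, fb = ((p - lo)[..., i:i + 1] for i in range(3))

    def corner(i, j, k):
        r = hi[..., 0] if i else lo[..., 0]
        g = hi[..., 1] if j else lo[..., 1]
        b = hi[..., 2] if k else lo[..., 2]
        return lut3d[r, g, b]

    # blend the 8 surrounding lattice points: red, then green, then blue
    c00 = corner(0, 0, 0) * (1 - fr) + corner(1, 0, 0) * fr
    c10 = corner(0, 1, 0) * (1 - fr) + corner(1, 1, 0) * fr
    c01 = corner(0, 0, 1) * (1 - fr) + corner(1, 0, 1) * fr
    c11 = corner(0, 1, 1) * (1 - fr) + corner(1, 1, 1) * fr
    c0 = c00 * (1 - fg) + c10 * fg
    c1 = c01 * (1 - fg) + c11 * fg
    return c0 * (1 - fb) + c1 * fb

def display_chain(rgb, look_3d=None, panel_1d=None, calib_3d=None):
    """Stages 2-4 of the chain above; any stage can be bypassed."""
    if look_3d is not None:
        rgb = apply_3d_lut(rgb, look_3d)   # 2. user "DIT" look LUT
    if panel_1d is not None:
        rgb = apply_1d_lut(rgb, panel_1d)  # 3. gamma / white balance
    if calib_3d is not None:
        rgb = apply_3d_lut(rgb, calib_3d)  # 4. gamut calibration
    return rgb                             # 5. off to the panel

# identity LUTs just to exercise the chain
n = 17
g = np.linspace(0.0, 1.0, n)
identity_3d = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
identity_1d = np.stack([g, g, g], axis=-1)
pixel = np.array([[0.25, 0.5, 0.75]])
print(display_chain(pixel, identity_3d, identity_1d, identity_3d))  # unchanged
```

The takeaway is simply that each stage is an independent, switchable transform, so a calibration or look can be slotted in or out without rebuilding everything into one LUT.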

White Balance Adjustment:

As detailed in my previous articles on white balancing Sony OLED monitors, the FSIs can be adjusted in exactly the same way. First, input a 100% white test signal into the monitor and, using your probe of choice, adjust its red, green, and blue gains until the probe confirms white is reading at the desired targets. Do the same thing using a 20% gray test signal and adjust bias until the same targets are hit. For studio levels, our luminance target for gain is 100 nits. Using the display gamma of 2.4, the luminance target for bias is 2.4 nits. These adjustments happen in the monitor's 1D LUT position.
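For a rough sense of the math behind the gain step, here's a simplified sketch that estimates relative RGB gain multipliers from a measured white point. It assumes the panel behaves like an ideal Rec. 709 display (the matrix is the standard Rec. 709 one, not the panel's measured primaries), so treat it as an approximation; the real adjustment is iterative, made against live probe readings.

```python
import numpy as np

# Standard Rec. 709 / sRGB RGB-to-XYZ matrix (D65), standing in for the
# panel's actual primaries, which a real calibration would measure.
M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])

def xy_to_XYZ(x, y, Y=1.0):
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

def gain_correction(measured_xy, target_xy=(0.3127, 0.3290)):
    """Relative R/G/B multipliers that move the measured white toward target."""
    rgb_measured = np.linalg.solve(M_RGB_TO_XYZ, xy_to_XYZ(*measured_xy))
    rgb_target = np.linalg.solve(M_RGB_TO_XYZ, xy_to_XYZ(*target_xy))
    gains = rgb_target / rgb_measured
    return gains / gains.max()   # never boost a channel past its current gain

# example: a white point measuring slightly green of D65
print(gain_correction((0.310, 0.335)).round(4))
```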

Setting up the Monitor for Judd-Vos:

FSI monitors include a 1D LUT to transform the standard CIE 1931 white balance to Judd-Vos Modified CMF targets. At the factory, FSI creates their calibration 3D LUT using LightSpace software, so if you were to put a new monitor into Judd Modified mode and then measure it, it should read at or close to x .307 y .318. If additional calibration is required, FSI recommends you do it with the 1931 CIE Color Matching Function turned on and then switch to Judd Modified afterwards. This selection applies the correct Judd-Vos white balance offset to the custom calibration.

Custom Calibration 3D LUT:

Why would you need to create your own calibration LUT? When is a white balance adjustment not enough? White balance is the primary thing that drifts on a display, so it's the most logical place to begin your calibration. You may, however, discover issues that can't be corrected with it alone. For example, if you were to measure a monitor with a given probe and determine that the red primary was a bit under where it should be for the color space selection, you would have no real way to adjust it. With access to a 3D LUT based calibration, you can adjust where that point lies based on your reference instrument. Similarly, even if your probe comes up with a perfect measurement, you may find these primary coordinates drift over time, and with no access to a LUT based calibration, there's little that can be done about it.

Currently, these LUTs are made with third party software, either LightSpace CMS or SpectraCal CalMAN. These can be purchased directly from FSI, bundled with the monitor along with different probe options.

The process for creating calibration 3D LUTs is specific to the software and hardware you're using. This article is not intended to be a LightSpace or CalMAN tutorial, as that information can be found on those companies' websites. As an overview, the process is done by putting the monitor into its native, wide gamut, profiling it with the software, exporting the resulting LUT from the program, and then finally loading it into one of the monitor's 3D LUT positions.
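For readers who haven't looked inside one of these files, a calibration 3D LUT is ultimately just a lattice of output RGB triplets. The sketch below writes an identity cube in the common .cube text format that tools like LightSpace, CalMAN, and Resolve work with; the file name, title, and 33-point size are arbitrary examples, and a real calibration LUT would of course come from profiling, not from this placeholder.

```python
# A minimal sketch of the .cube text format; this just writes an identity
# cube as a placeholder payload (file name, title, and size are arbitrary).
def write_identity_cube(path, size=33, title="identity"):
    with open(path, "w") as f:
        f.write(f'TITLE "{title}"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        # convention: the red index varies fastest, then green, then blue
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    f.write(f"{r/(size-1):.6f} {g/(size-1):.6f} {b/(size-1):.6f}\n")

write_identity_cube("identity_33.cube")
```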

Direct Connect Automatic Alignment:

FSI monitors can also be automatically aligned using the Minolta CA-310 colorimeter. No computer is required: just plug the probe into the monitor to automatically adjust the 1D LUT controlling gamma and white balance.

Luminance Level and Flicker Free Mode:

With the nearly instant pixel response time of OLED panels, the longer the interval between frames, the more they flicker. This is inherent to the display technology and causes lower frame rates such as 23.98 to strobe and pulse exactly as they would on a CRT monitor. FSI OLEDs have something in common with the Sony PVM series in that they require an additional step in image processing to minimize this issue. The Flicker Free mode on the FSI is a double pulse method, which is not quite the same as simply doubling the frame rate but is close to how the PVM handles this process. The problem of low level clipping and luminance shift when using Flicker Free mode has been resolved through firmware on FSI monitors, as it has on Sony's new PVM-A250 and PVM-A170 monitors. Sony's BVM series OLEDs are driven with more sophisticated electronics that eliminate this problem altogether by displaying 23.98 at 72 Hz. Virtually all of the motion imaging related and signal delay problems seen on the PVM and FSI monitors are eliminated by this process. Another advantage of better hardware is less calibration drift and more stability over long periods of time.

It is recommended that all OLED monitors be measured and if need be, realigned at least every 6 months. 

RELATED ARTICLES:

HD Monitor Calibration - White Balance and Color Bars

Sony OLED Calibration part 1

Sony OLED Calibration part 2

ACES in 10 Minutes

May 17, 2014

Lately I've been taking a hard look at the Academy Color Encoding System, aka ACES, and trying to wrap my head around it. There are a handful of decent white papers on the topic but they tend to be overly technical. By pulling tidbits from multiple sources, one can come to a decent understanding of the hows and whys of ACES, but I was hoping to find some kind of an overview; something that presented the "need to knows" in a logical and concise way and wouldn't require a big time commitment to understand. I did not find such a resource so I'm writing it myself.

Conceptually, I love ACES. In practice, I don't have a great deal of hands-on experience with it. What I do know is from my own research and extensive testing in Resolve. It's quite brilliantly conceived but seems to be catching on rather slowly despite established standards and practices and a good track record. Moving forward, as the production community meanders farther and farther from HDTV, I think ACES will emerge as the most appropriate workflow.

The principal developer of ACES is the Academy of Motion Picture Arts and Sciences (AMPAS).

“The Academy Color Encoding System is a set of components that facilitates a wide range of motion picture workflows while eliminating the ambiguity of today’s file formats.”

This description is vague but it seems to me that the Academy’s hope for ACES is to resolve two major problems.

1. To create a theoretically limitless container for seamless interchange of high resolution, high dynamic range, wide color gamut images regardless of source. ACES utilizes a file format that can encode the entire visible spectrum across up to 30 stops of dynamic range. Once transformed, all source material is described in this system in exactly the same way.

2. To create a future-proof “Digital Source Master” format in which the archive is as good as the source. ACES utilizes a portable programming language specifically for color transforms called CTL (Color Transform Language). The idea is that any project mastered in ACES would never need to be re-mastered. As future distribution specifications emerge, a new color transform is written and applied to the ACES data to create the new deliverable.

The key to understanding ACES is to acknowledge the difference between “scene referred” and “display referred” images. 

A scene referred image is one whose light values are recorded as they existed at the camera focal plane before any kind of in-camera processing. These linear light values are directly proportional to the objective, physical light of the scene exposure. By extension, if an image is scene referred then the camera that captured it is little more than a photon measuring device. 

A display referred image is one defined by how it will be displayed. Rec.709, for example, is a display referred color space, meaning the contrast range of Rec.709 images is mapped to the contrast range of the display device, an HD television.

Beyond this key difference, there are several other new terms and acronyms used in ACES. These are the “need to knows”. 

IDT, Input Device Transform:
Transforms source media into scene referred, linear light, ACES color space. Each camera type or imaging device requires its own IDT.

LMT, Look Modification Transform:
Once images are in ACES color space, the LMT provides a way to customize a starting point for color correction. The LMT doesn't change any image data whereas an actual grade works directly on the ACES pixels. An example of a LMT would be a "day for night" or "bleach bypass" look. 

RRT, Reference Rendering Transform:
Controlled by the ACES Committee and intended to be the universal standard for transforming ACES images from their scene referred values to high dynamic range output referred values. The RRT is where the images are rendered, but it is not optimized for any one output format, so it requires an ODT to ensure the result is correct for the specific output target.

ODT, Output Device Transform:
Maps the high dynamic range output of the RRT to display referred values for specific color spaces and display devices. Each target type requires its own ODT. In order to view ACES images on a broadcast monitor for example, they must first go through the RRT and then the Rec.709 ODT. 

The ACES RRT and ODT work together like a display or output LUT in a more conventional video workflow, much like the Rec.709 3D LUT we use to monitor LogC images from an Arri Alexa. The combination of RRT and ODT is referred to as the ACES Viewing Transform. If you have ACES files, you will always need an ACES Viewing Transform to view them. An additional step would be to use an LMT to customize the ACES Viewing Transform for a unique look.
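To illustrate the order of application only (the real RRT and ODTs are CTL programs maintained by the Academy; the toy tone curve and gamma encode below are stand-ins of my own invention, not the actual transforms), the viewing path composes like this:

```python
import numpy as np

# Toy stand-ins only: the real RRT and ODTs are CTL programs maintained by
# the Academy. These placeholders just illustrate the order of application.
def toy_rrt(scene_linear):
    # simple tone compression standing in for the RRT's rendering
    return scene_linear / (scene_linear + 1.0)

def toy_odt_rec709(output_referred):
    # clip to the display range and gamma encode for a Rec.709 monitor
    return np.clip(output_referred, 0.0, 1.0) ** (1.0 / 2.4)

def aces_viewing_transform(scene_linear, lmt=None):
    img = lmt(scene_linear) if lmt is not None else scene_linear  # optional LMT
    return toy_odt_rec709(toy_rrt(img))                           # RRT, then ODT

# a ten-stop scene-linear gray ramp centered on 18% gray
ramp = 0.18 * 2.0 ** np.linspace(-5, 5, 11)
print(aces_viewing_transform(ramp).round(3))
```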

ACES Encoded OpenEXR:
Frame based file format used in ACES. It is scene referred, linear, RGB, 16 bit floating point (half precision), which allows roughly 1024 steps per stop of exposure, up to 30 total stops of dynamic range, and a color gamut that exceeds human vision.
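Those figures fall out of the half-float encoding itself: 10 mantissa bits give 1024 representable values per doubling of linear value, and the normal (non-denormal) range spans roughly 30 doublings. A quick NumPy check, assuming one stop equals one doubling:

```python
import numpy as np

# Every 16-bit pattern, reinterpreted as a half-precision float.
bits = np.arange(2**16, dtype=np.uint16)
vals = bits.view(np.float16)

# Steps per stop: representable values in one doubling, e.g. [1.0, 2.0).
per_stop = np.count_nonzero((vals >= 1.0) & (vals < 2.0))
print(per_stop)  # 1024 (10 mantissa bits)

# Stops covered by normal (non-denormal) half floats: 2^-14 up to 65504.
normals = vals[np.isfinite(vals) & (vals >= 2.0**-14)]
stops = np.log2(float(normals.max()) / float(normals.min()))
print(round(stops, 1))  # ~30.0 stops
```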

ACES workflow utilizes a linear sequence of transforms to create a hybrid system that begins as scene referred and ends as display referred. It is a theoretically limitless space in which we work at lossless image quality (scene referred), but the images inevitably need to be squeezed into a smaller and more manageable space for viewing and/or delivery (display referred).

Any image that goes into ACES is first transformed to its scene referred, linear light values with an Input Device Transform, or IDT. This IDT is specific to the camera that created the image and is written in the ACES Color Transform Language by the manufacturer. The transform is extremely sophisticated and essentially deconstructs the source file, taking into account all the specifics of the capture medium, to return as close as possible to the original light of the scene exposure. Doing this allows multiple camera formats to be reduced to their most basic, universal state and gives access to every last bit of picture information available in the recording.

Big caveat - an ACES IDT requires viable picture information to do the transform so if it’s not there because of poor exposure then there's nothing that can be done to bring it back. Dynamic range cannot be extended and image quality lost through heavy compression cannot be restored. The old adage, “Garbage In, Garbage Out”, is just as true in ACES.

In ACES, scene referred images are represented as linear light, which does not correspond to the way the human eye perceives light, so they are not practically viewable and must be transformed into some display referred format like Rec.709. This is done at the very end of the chain with the Output Device Transform, or ODT.

There are, however, a few more transforms that happen between the IDT and ODT: the LMT (Look Modification Transform), where grading and color correction happen, and the RRT (Reference Rendering Transform), where the ACES scene referred values begin their transformation to a display/output format for viewing.

Because this process is very linear it should be fairly simple to explain in a diagram. Let’s try.


I hope this overview is intuitive but in my desire to simplify, I easily could have overlooked important components. It's a very scientific topic and I'm coming at this from the practical viewpoint of a technician. I’m always open to feedback.

I would encourage anyone interested in ACES to join the ACES community forum at www.ACEScentral.com and read up on the latest developments and implementations of the project.

OFFICIAL ACES INFORMATION FROM THE ACADEMY OF MOTION PICTURE ARTS AND SCIENCES:

http://www.oscars.org/science-technology/sci-tech-projects/aces

http://www.oscars.org/science-technology/council/projects/pdf/ACESOverview.pdf

MORE RELATED ARTICLES:

http://www.finalcolor.com/acrobat/ACES_Nucoda%20r1_web.pdf

http://www.openexr.com/

http://www.poynton.com/w/ACES/

http://www.studiodaily.com/2011/02/is-justifieds-new-workflow-the-future-of-cinematography/

http://www.fxguide.com/featured/the-art-of-digital-color/

http://simon.tindemans.eu/essays/scenereferredworkflow

This article was updated 9/4/14 with input from Jim Houston, Academy co-chair of the ACES project. It was updated again on 4/2/17 with input from Steve Tobenkin from ACES Central

NAB 2014 Post-Mortem

May 10, 2014

A NAB blog post one month after the show? Better late than never, but this is pretty bad. So what's left to say? Well, in my opinion this year's show was, in a word, underwhelming. Among the countless new wares on display there were really only a handful that would stop you in your tracks with the freshness of their concept or utility. If my saying this comes off as waning enthusiasm then it might be true, and I've been thinking a lot about why that is.

Not to point out the obvious but over the last 5 years monumental things have happened for the filmmaker. Within a very short span what was prohibitively expensive and difficult to achieve for 100 years of movie making became affordable and thus accessible to a whole new generation of artists. For the first time ever, anyone with a couple of grand could produce cinematic images and find an audience.

This was a two-fold revelation –

Manufacturing, imaging, and processing breakthroughs, along with mobile technology, facilitated high-quality, low-cost acquisition and postproduction; new social media avenues then provided a waiting pool of resources and viewers.

In 1979 on the set of Apocalypse Now, Francis Ford Coppola said, 

“To me, the great hope is that now these little 8mm video recorders and stuff have come out, and some... just people who normally wouldn't make movies are going to be making them. And you know, suddenly, one day some little fat girl in Ohio is going to be the new Mozart, you know, and make a beautiful film with her little father's camera recorder. And for once, the so-called professionalism about movies will be destroyed, forever. And it will really become an art form. That's my opinion.” 

This statement no doubt sounded ludicrous in 1979 but the sentiment of technology empowering art is a beautiful one.

Turns out he was right and it did happen, in a big way, and predictably these developments not only empowered artists but ignited an industry-wide paradigm shift. Over the course of the last decade, media has been on a course of democratization and it's been a very exciting and optimistic time to be in the business. But here we are now in 2014; the dust has settled and the buzz has worn off a bit. It's back to business as usual, but in a new paradigm, one defined by a media experience that's now digital from end to end and completely mobile, one where almost everyone is carrying around a camera in their pocket and being a "cameraperson" is a far more common occupation than ever before.

Because so much has happened in such a short time, it's now a lot harder for new technology to seize the public's imagination like the first mass-produced, Raw recording digital cinema camera did. In the same vein, a full frame DSLR that shoots 24p video was a big deal. A sub $100k digital video camera with dynamic range rivaling film was a big deal. Giving away powerful color correction and finishing software for free was a big deal. I'm always looking for the next thing, the next catalyst, and with a few exceptions, I didn't see much in this year's NAB offerings. I predict more of the same in the immediate future: higher resolution, wider dynamic range, and ever smaller and cheaper cameras. This is no doubt wonderful for filmmakers and advances the state of the art but, in my opinion, unlikely to be as impactful on the industry as my previous examples.

That said, this is not an exhaustive NAB recap. Instead I just want to touch on a few exhibits that really grabbed me. New technology that will either -

A. Change the way camera / media professionals do their job.

B. Show evidence of a new trend in the business or a significant evolution of a current one.

Or both.

Dolby Vision

Dolby's extension of their brand equity into digital imaging is a very smart move for them. We've been hearing a lot about it, but what exactly is it? In 2007, Dolby Laboratories bought the Canadian company BrightSide Technologies, integrated its processes, and renamed the result Dolby Vision.

"True-to-Life Video

Offering dramatically expanded brightness, contrast, and color gamut, Dolby® Vision delivers the most true-to-life viewing experience ever seen on a display. Only Dolby Vision can reveal onscreen the rich detail and vivid colors that we see in nature."

It is a High Dynamic Range (HDR) image achieved through ultra-bright, RGB LED backlit LCD panels. Images for Dolby Vision require a different finishing process and a higher bandwidth television signal, as it uses 12 bits per color channel instead of the standard 8 bits. This allows for an ultra wide gamut image at a contrast ratio greater than 100,000:1.

Display brightness is measured in candelas per square meter, cd/m2, or "nits" in engineering parlance. Coming from a technician's point of view, where I'm used to working at studio levels with displays that measure 100 nits, hearing that Dolby Vision operates at 2000-4000 nits sounded completely insane to me.

For context, a range of average luminance levels –

Professional video monitor calibrated to Studio Level: 100 nits
Phone / mobile device, laptop screen: 200-300 nits
Typical movie theater screen: 40-60 nits
Home plasma TV: ~200 nits
Home LCD TV: 200-400 nits
Home OLED TV: 100-300 nits
Current maximum Dolby Vision test: 20,000 nits 
Center of 100 watt light bulb: 18,000 nits
Center of the unobstructed noontime sun: 1.6 billion nits
Starlight: ~.0001 nit

After seeing the 2000 nit demo unit at Dolby’s booth, I now understand that display brightness at these high levels is the key to creating a whole new level of richness and contrast. It’s in fact quite a new visual experience and “normal” images at 100 nits seem quite muddy in comparison.

These demonstrations are just a taste of where this is going though. According to Dolby's research, most viewers want images that are 200 times brighter than today’s televisions. If this is the direction display technology is going then it is one that's ideal for taking advantage of the wide dynamic range of today's digital cinema cameras.

Because it poses a challenge to an existing paradigm, and even though there are serious hurdles, Dolby Vision is rich with potential, so it was for me the most interesting thing I saw at this year's NAB show. It really got me thinking about the ramifications for the cinematographer and for camera and video technicians working on the set with displays this bright. It would require a whole new way of thinking about and evaluating relative brightness, contrast, and exposure. Not to mention that a 4000 nit monitor on the set could theoretically light the scene! This is a technology I will continue to watch with great interest.

Andra Motion Focus

My friends at Nofilmschool did a great piece on this >>>

Matt Allard of News Shooter wrote this excellent Q & A on the Andra >>>

Andra is interesting because it's essentially an alternative application of magnetic motion capture technology. Small sensors are worn under the actor's clothing, some variables are programmed into the system, and the Andra does the rest. The demonstration at their booth seemed to work quite well and it's an interesting application of existing, established technology. It does indeed have the potential to change the way lenses are focused in production, but I do have a few concerns that could potentially prevent it from being 100% functional on the set.

1. Size. It's pretty big for now. As the technology matures, it will no doubt get smaller.

Image from Jon Fauer's Film and Digital Times >>>

2. Control. Andra makes a FIZ handset for it called the ARC that looks a bit like Preston's version. It can also be controlled by an iPad, but that seems impractical for most of the 1st ACs I know. In order for Andra to work, shifting between the system's automatic control and manual control with the handset would have to be completely seamless. If Auto Andra wasn't getting it, you would need to already be in the right place on the handset so that you could manually correct. It would have to be a perfectly smooth transition between auto and manual or I don't see this system being one that could replace current focus pulling methodology.

3. Setup time. Andra works by creating a 3D map of the space around the camera, and this is done by setting sensors. A 30x30 space apparently requires about six sensors. Actors are also required to wear sensors. Knowing very well the speed at which things happen on the set and how difficult it can be for the ACs to get their marks, Andra's setup time would need to be very fast and easy. If it takes too long, it will quickly become an issue and then it's back to the old fashioned way: marks, an excellent sense of distance, and years of hard earned experience.

Arri UWZ 9.5-18mm Zoom Lens

We associate lenses this wide with undesirable characteristics such as barrel distortion, architectural bowing, and chromatic aberrations around the corners and frame edges. Because Arri's new UWZ lens exhibits none of these characteristics, it offers a completely fresh perspective for wide angle images.

DaVinci Resolve 11

Now a fully functional Non-Linear Editor!

One potential scenario: imagine a world where all digital media could be reviewed, edited, fixed and enhanced, and then output for any deliverable in one single piece of software. Imagine if said software were free and users at all levels and disciplines of production and post-production were using it. Just how much faster, easier, and cheaper would that make everything across the board from acquisition to delivery? Forget Blackmagic Design's cameras; Resolve is their flagship and what will guarantee them relevance. It is the conduit through which future filmmakers will tell their stories.

Being a Digital Imaging Technician, I can't help but wonder, though, what will happen to on-set transcoding when, perhaps in the near future, editors themselves are working in Resolve and are able to apply lookup tables and color correction to the native, high resolution media they're working with.

Sony

Sony always has one of the largest booths and the most impressive volume of quality new wares at NAB. Being an international corporation with significant resources spread out over multiple industries, I think they've done a surprisingly good job of investing in the right R&D and have pushed the state of the art of digital imaging forward. A serious criticism, however, is that they do a very poor job of timing the updates on their product lines. Because of this, many of us Sony users have lost a lot of money and found ourselves holding expensive product with greatly reduced value as little as a year after purchase. Other than that, Sony continues to make great stuff and I personally have found their customer service to be quite good over the years. I always enjoy catching up at the show with my Sony friends from their various outposts around the world.

Sony F55 Digital Camera

The one thing that Sony has really gotten right is the F55. Through tireless upgrades, it has become the Swiss Army Knife of digital cinema cameras. One quick counterpoint: after seeing F55 footage against F65 footage at Sony's 4k projection, I have to say that I prefer the F65's image a lot. It is smoother and more gentle, and the mechanical shutter renders movement in a much more traditionally cinematic way. It's sad to see that camera so maligned as the emphasis is now very much on the F55. Sony is constantly improving this camera, with major features coming such as ProRes and DNxHD codecs, extended dynamic range with SLog 3, 4k slow motion photography, and more. Future modular hardware accessories will allow the camera to be adapted for use in a variety of production environments.

Like the Shoulder-mount ENG Dock.

This looks like it would be very comfortable to operate for those of us who came up with F900s on our shoulders.

While this wasn't a new announcement, another modular F55 accessory on display at the show was this Fiber Adapter for 4k Live Production which can carry a 3840x2160 UHDTV signal up to 2000 meters over SMPTE Fiber. If the future of digital motion picture cameras is modular, then I think Sony has embraced it entirely with the F55. 

While F55 Firmware Version 4 doesn't offer as much as V3 did, 4k monitoring over HDMI 2.0 is a welcome addition as it's really the only practical solution at present. 4x 3G-SDI links pose serious problems; Sony is aware of this and has invested substantially in R&D for a 4k over IP, 10 gigabit Ethernet solution.

While it's difficult to discern what you're actually looking at in the image below, the 4k SDI to IP conversion equipment was on display at the show.

If this technology could become small and portable enough that a quad SDI to IP converter could live on the camera, your cable run could be a single length of cheap Cat6 Ethernet cable to the engineering station, where it would get converted back to an SDI interface. This would solve the current on-set 4k monitoring conundrum. In the meantime, there really aren't a ton of options, and Sony currently has only two 30" 4k monitors with a 4x 3G-SDI interface that could conceivably be used on the set.

The PVM-X300 LCD, which was announced last year and has already come down in price by about 50%.

And the first 4k OLED, the Sony BVM-X300. While it's difficult to perceive 4k resolution on a display of this size, the image is gorgeous and it will no doubt be the Cadillac of 4k professional monitors once it's out. Sony was being typically mum about the specifics, so release date and price are currently unknown.

Sony BVM-X300 4k OLED Professional Monitor. I apologize for the terrible picture.

Sony A7s Digital Stills and 4k Video Camera

I'll briefly touch on the Sony A7s as I'm an A7r owner and have absolutely fallen in love with the camera. For those interested in how these cameras stack up, the Sony Alpha A7, A7r, and A7s are all full frame, mirrorless, E-mount, and have identical bodies.

The A7 is 24.3 MP, 6000x4000 stills, ISO 100-25,600, body only is $1698.

The A7r is the A7 minus the optical low pass filter and higher resolution 36.4 MP, 7360x4912 stills, ISO 100-25,600, body only is $2298.

The A7s is 12.2 MP, 4240x2832 stills, ISO 50-409,600, body only price is $2498. 

If anything, I think the A7s is indicative of an ever rising trend: small, relatively inexpensive cameras that shoot high resolution stills and video. I'm guessing that most future cameras above a certain price point will be "4k-apable". That doesn't mean I would choose to shoot motion pictures on a camera like this. When cameras this small are transformed into production mode, they require too many unwieldy and cumbersome accessories. The shooter and/or camera department just ends up fighting with the equipment. I want to work with gear that facilitates doing your best work, and in my experience with production, this is not repurposed photography equipment.

Interestingly enough, despite this, the A7s seems to be much more a 4k video camera than a 4k raw stills camera. On the sensor level, every pixel in its 4k array is read out without pixel binning, which allows it to output 8 bit 4:2:2 YCbCr uncompressed 3840x2160 video over HDMI in different gammas, including SLog 2. This also allows for greatly improved sensor sensitivity, with an ISO range from 50 to 409,600. The camera has quite a lot of other video-centric features such as timecode, picture profiles, and balanced XLR inputs with additional hardware. The A7s' internal video recording is HD only, which means that 4k recording must be done with some sort of off-board HDMI recorder.

As is evident from many wares at this year's show, if you can produce a small on-camera monitor then it might as well record a variety of video signals as well.

Enter the Atomos Shogun. Purpose built for cameras like the Sony A7s and at $1995, a very impressive feature set. 

Hey what camera is that?

Shooting my movie with this setup doesn't sound fun but the Shogun with the A7s will definitely be a great option for filmmakers on micro budgets. 

One cool and unexpected feature of shooting video on these Sony cameras with the Sony E-mount lenses (there aren't many choices just yet) is that autofocus works surprisingly well. I've been playing around with this using the A7r and the Vario-Tessar 24-70mm zoom shooting 1080 video. The lens focuses itself quite effectively in this mode, which is great for docu and DIY stuff. I have to say I'm not terribly impressed with this lens in general though.

Sony Vario-Tessar T* FE 24-70mm f/4 ZA OSS Lens

It's quite small and light, and the autofocus is good, but F4 sucks. The bokeh is sharp and jagged instead of smooth and creamy, and it doesn't render the space of the scene as nicely as the Canon L Series zooms, which is too bad. Images from this lens seem more spatially compressed than they should be.

At Sony's booth I played around with their upcoming FE 70-200mm f/4.0 G OSS lens on an A7s connected in 4k to a PVM-X300 via HDMI. I was even less impressed with this lens, not to mention there was quite a bit of CMOS wobble and skew coming out of the A7s. It wasn't the worst I've seen but definitely something to be aware of. This really should come as no surprise for a camera in this class, and even Sony's test footage seems to mindfully conceal it.

Pomfort's LiveGrade Pro v2

As a DIT, I'd be remiss if I didn't mention LiveGrade Pro. 

LiveGrade Pro is a powerful color management solution, now with GUI options, a Color Temperature slider that affects RGB gains equally, stills grading, ACES support, and support for multiple LUT display hardware devices. Future features include a Tint slider for the green-magenta axis nestled between Color Temp and Saturation. Right, Patrick Renner? :)

Conclusion 

So what's the next big epiphany? Is it this?

What is Jaunt? Cinematic Virtual Reality.

Jaunt and Oculus Rift were apparently at NAB this year and had a demo on the floor. This writer, however, was unable to find it. My time was unfortunately very limited, but other than Jaunt and the invite-only Dolby Vision demo, I'm feeling like I saw what I needed to see. What will be uncovered at next year's show? More of the same? Or a few things that are radically new and fresh?