NAB 2014 Post-Mortem

May 10, 2014

A NAB blog post a month after the show? Better late than never, but this is pretty bad. So what's left to say? In my opinion, this year's show was, in a word, underwhelming. Among the countless new wares on display, only a handful would stop you in your tracks with the freshness of their concept or utility. If my saying this comes off as waning enthusiasm, it might be true, and I've been thinking a lot about why that is.

Not to point out the obvious but over the last 5 years monumental things have happened for the filmmaker. Within a very short span what was prohibitively expensive and difficult to achieve for 100 years of movie making became affordable and thus accessible to a whole new generation of artists. For the first time ever, anyone with a couple of grand could produce cinematic images and find an audience.

This was a two-fold revelation –

Manufacturing, imaging, and processing breakthroughs, along with mobile technology, enabled high-quality, low-cost acquisition and post-production; new social media avenues then provided a waiting pool of resources and viewers.

In 1979 on the set of Apocalypse Now, Francis Ford Coppola said, 

“To me, the great hope is that now these little 8mm video recorders and stuff have come out, and some... just people who normally wouldn't make movies are going to be making them. And you know, suddenly, one day some little fat girl in Ohio is going to be the new Mozart, you know, and make a beautiful film with her little father's camera recorder. And for once, the so-called professionalism about movies will be destroyed, forever. And it will really become an art form. That's my opinion.” 

This statement no doubt sounded ludicrous in 1979 but the sentiment of technology empowering art is a beautiful one.

Turns out he was right, and it did happen in a big way; predictably, these developments not only empowered artists but ignited an industry-wide paradigm shift. Over the course of the last decade, media has been on a course of democratization, and it's been a very exciting and optimistic time to be in the business. But here we are in 2014: the dust has settled and the buzz has worn off a bit. It's back to business as usual, but in our new paradigm, one defined by a media experience that's now digital from end to end and completely mobile. One where almost everyone is carrying around a camera in their pocket, and being a "cameraperson" is a far more common occupation than ever before.

Because so much has happened in such a short time, it's now a lot harder for new technology to seize the public's imagination the way the first mass-produced, Raw-recording digital cinema camera did. In the same vein, a full-frame DSLR that shoots 24p video was a big deal. A sub-$100k digital video camera with dynamic range rivaling film was a big deal. Giving away powerful color correction and finishing software for free was a big deal. I'm always looking for the next thing, the next catalyst, and with a few exceptions, I didn't see much in this year's NAB offerings. I predict more of the same in the immediate future – higher resolutions, wider dynamic range, and ever smaller and cheaper cameras. This is no doubt wonderful for filmmakers and advances the state of the art, but in my opinion it's unlikely to be as impactful on the industry as my previous examples.

That said, this is not an exhaustive NAB recap. Instead I just want to touch on a few exhibits that really grabbed me. New technology that will either:

A. Change the way camera / media professionals do their job.

B. Show evidence of a new trend in the business or a significant evolution of a current one.

Or both.

Dolby Vision

Dolby's extension of their brand equity into digital imaging is a very smart move for them. We've been hearing a lot about it, but what exactly is it? In 2007, Dolby Laboratories, Inc. bought the Canadian company BrightSide Technologies, integrated its processes, and renamed the result Dolby Vision.

"True-to-Life Video

Offering dramatically expanded brightness, contrast, and color gamut, Dolby® Vision delivers the most true-to-life viewing experience ever seen on a display. Only Dolby Vision can reveal onscreen the rich detail and vivid colors that we see in nature."

It is a High Dynamic Range (HDR) image achieved through ultra-bright, RGB LED backlit LCD panels. Images for Dolby Vision require a different finishing process and a higher-bandwidth television signal, as it uses 12 bits per color component instead of the standard 8. This allows for an ultra-wide-gamut image at a contrast ratio greater than 100,000:1.

Display brightness is measured in candelas per square meter (cd/m²), or "nits" in engineering parlance. Coming from a technician's point of view, where I'm used to working at Studio Level – meaning my displays measure 100 nits – hearing that Dolby Vision operates at 2,000-4,000 nits sounded completely insane to me.

For context, a range of average luminance levels –

Professional video monitor calibrated to Studio Level: 100 nits
Phone / mobile device, laptop screen: 200-300 nits
Typical movie theater screen: 40-60 nits
Home plasma TV: <200 nits
Home LCD TV: 200-400 nits
Home OLED TV: 100-300 nits
Current maximum Dolby Vision test: 20,000 nits 
Center of 100 watt light bulb: 18,000 nits
Center of the unobstructed noontime sun: 1.6 billion nits
Starlight: <0.0001 nit
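The spread between these luminance levels is easier to grasp in photographic stops (doublings of luminance). A rough sketch:

```python
import math

def stops_between(low_nits, high_nits):
    """Number of photographic stops (doublings) between two luminance levels."""
    return math.log2(high_nits / low_nits)

# Studio reference white (100 nits) up to the 2000-4000 nit Dolby Vision demos:
print(round(stops_between(100, 2000), 1))   # 4.3 stops brighter
print(round(stops_between(100, 4000), 1))   # 5.3 stops brighter

# The full span from starlight to the unobstructed noontime sun:
print(round(stops_between(0.0001, 1.6e9)))  # ~44 stops
```

That last number is why no single display (or camera) can yet reproduce the full range of luminance we experience in nature.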

After seeing the 2000 nit demo unit at Dolby's booth, I now understand that display brightness at these high levels is the key to creating a whole new level of richness and contrast. It's genuinely a new visual experience, and "normal" images at 100 nits seem quite muddy in comparison.

These demonstrations are just a taste of where this is going though. According to Dolby's research, most viewers want images that are 200 times brighter than today’s televisions. If this is the direction display technology is going then it is one that's ideal for taking advantage of the wide dynamic range of today's digital cinema cameras.

Because it poses a challenge to an existing paradigm, and despite the serious hurdles, Dolby Vision is rich with potential, and it was for me the most interesting thing at this year's NAB show. It really got me thinking about the ramifications for the cinematographer and for camera and video technicians working on the set with displays this bright. It would require a whole new way of thinking about and evaluating relative brightness, contrast, and exposure. Not to mention that a 4000 nit monitor on the set could theoretically light the scene! This is a technology I will continue to watch with great interest.

Andra Motion Focus

My friends at Nofilmschool did a great piece on this >>>

Matt Allard of News Shooter wrote this excellent Q & A on the Andra >>>

Andra is interesting because it's essentially an alternative application of magnetic motion capture technology. Small sensors are worn under the actor's clothing, some variables are programmed into the system, and the Andra does the rest. The demonstration at their booth seemed to work quite well, and it's an interesting application of existing, established technology. It does indeed have the potential to change the way lenses are focused in production, but I do have a few concerns that could prevent it from being 100% functional on the set.

1. Size. It's pretty big for now. As the technology matures, it will no doubt get smaller.

Image from Jon Fauer's Film and Digital Times >>>

2. Control. Andra makes a FIZ handset for it called the ARC that looks a bit like Preston's version. It can also be controlled by an iPad, but that seems impractical for most of the 1st ACs I know. For Andra to work, shifting between the system's automatic control and manual control with the handset would have to be completely seamless. If Auto Andra wasn't getting it, you would need to already be in the right place on the handset so you could correct manually. Unless the transition between auto and manual is perfectly smooth, I don't see this system replacing current focus-pulling methodology.

3. Setup time. Andra works by creating a 3D map of the space around the camera, and this is done by setting sensors. A 30x30 space apparently requires about six of them, and the actors are required to wear sensors as well. Knowing very well the speed at which things happen on the set and how difficult it can be for the ACs to get their marks, Andra's setup would need to be very fast and easy. If it takes too long, it will quickly become an issue, and then it's back to the old-fashioned way: marks, an excellent sense of distance, and years of hard-earned experience.
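Once a 3D map exists, the core math of a system like this is simple: the focus distance is just the distance from the tracked sensor to the camera. A hypothetical sketch (the actual Andra internals are not public, so this is purely illustrative):

```python
import math

def focus_distance(camera_pos, subject_pos):
    """Euclidean distance from the camera to a tracked sensor,
    in whatever units the 3D map uses (here: feet)."""
    return math.sqrt(sum((s - c) ** 2 for c, s in zip(camera_pos, subject_pos)))

# Camera at the origin, actor's sensor tracked 8 ft out and 3 ft to the side:
d = focus_distance((0.0, 0.0, 0.0), (8.0, 3.0, 0.0))
print(round(d, 2))  # 8.54
```

The hard part, of course, isn't this arithmetic – it's tracking the positions accurately and deciding, in real time, which body part the lens should actually be focused on.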

Arri UWZ 9.5-18mm Zoom Lens

We associate lenses this wide with undesirable characteristics such as barrel distortion, architectural bowing, and chromatic aberration in the corners and at the frame edges. Because Arri's new UWZ lens exhibits none of these characteristics, it offers a completely fresh perspective for wide-angle images.

DaVinci Resolve 11

Now a fully functional Non-Linear Editor!

One potential scenario: imagine a world where all digital media could be reviewed, edited, fixed and enhanced, and then output for any deliverable in a single piece of software. Imagine if that software were free and users at every level and discipline of production and post-production were using it. How much faster, easier, and cheaper would that make everything across the board, from acquisition to delivery? Forget Blackmagic Design's cameras; Resolve is their flagship and what will guarantee their relevance. It is the conduit through which future filmmakers will tell their stories.

Being a Digital Imaging Technician, I can't help but wonder what will happen to on-set transcoding when, perhaps in the near future, editors themselves are working in Resolve and can apply Lookup Tables and color correction to the native, high-resolution media they're working with.

Sony

Sony always has one of the largest booths and the most impressive volume of quality new wares at NAB. For an international corporation with significant resources spread over multiple industries, I think they've done a surprisingly good job of investing in the right R&D and have pushed the state of the art of digital imaging forward. A serious criticism, however: they do a very poor job of timing the updates to their product lines. Because of this, many of us Sony users have lost a lot of money and found ourselves holding expensive product with greatly reduced value as little as a year after purchase. Other than that, Sony continues to make great stuff, and I personally have found their customer service to be quite good over the years. I always enjoy catching up at the show with my Sony friends from their various outposts around the world.

Sony F55 Digital Camera

The one thing that Sony has really gotten right is the F55. Through tireless upgrades, it has become the Swiss Army Knife of digital cinema cameras. One quick counterpoint: after seeing F55 footage against F65 footage at Sony's 4k projection, I have to say I much prefer the F65's image. It is smoother and more gentle, and the mechanical shutter renders movement in a much more traditionally cinematic way. It's sad to see that camera so maligned now that the emphasis is very much on the F55. Sony is constantly improving this camera, with major features coming such as ProRes and DNxHD codecs, extended dynamic range with S-Log3, 4k slow-motion photography, and more. Future modular hardware accessories will allow the camera to be adapted for use in a variety of production environments.

Like the Shoulder-mount ENG Dock.

This looks like it would be very comfortable to operate for those of us who came up with F900s on our shoulders.

While this wasn't a new announcement, another modular F55 accessory on display at the show was this Fiber Adapter for 4k Live Production which can carry a 3840x2160 UHDTV signal up to 2000 meters over SMPTE Fiber. If the future of digital motion picture cameras is modular, then I think Sony has embraced it entirely with the F55. 

While F55 Firmware Version 4 doesn't offer as much as V3 did, 4k monitoring over HDMI 2.0 is a welcome addition, as it's really the only practical solution at present. 4x 3G-SDI links pose serious problems; Sony is aware of this and has invested substantially in R&D for a 4k-over-IP, 10-gigabit Ethernet solution.

While it's difficult to discern what you're actually looking at in the below image, the 4k SDI to IP conversion equipment was on display at the show. 

If this technology becomes small and portable enough that a quad-SDI-to-IP converter could live on the camera, your cable run could be a single length of cheap Cat6 Ethernet to the engineering station, where it would be converted back to an SDI interface. This would solve the current on-set 4k monitoring conundrum. In the meantime, there really aren't many options; Sony currently has only two 30" 4k monitors with a 4x 3G-SDI interface that could conceivably be used on the set.
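A back-of-envelope calculation shows why a single 10 GbE link is so attractive here. This sketch assumes uncompressed 10-bit 4:2:2 video and ignores blanking and protocol overhead:

```python
def uncompressed_bitrate_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    """Raw video payload in gigabits per second (no blanking or protocol overhead)."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# UHD 3840x2160 at 30p, 10-bit 4:2:2 (2 samples per pixel on average:
# one luma sample plus alternating Cb/Cr)
print(round(uncompressed_bitrate_gbps(3840, 2160, 30, 10, 2), 1))  # 5.0 Gb/s

# At 60p the payload doubles and starts to saturate a single 10 GbE link:
print(round(uncompressed_bitrate_gbps(3840, 2160, 60, 10, 2), 1))  # 10.0 Gb/s
```

So a single Cat6 run comfortably carries UHD 30p, and 60p is right at the edge, which is presumably where light compression or smarter packing comes in.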

The PVM-X300 LCD, which was announced last year and has already come down in price by about 50%.

And the first 4k OLED, the Sony BVM-X300. While it's difficult to perceive 4k resolution on a display of this size, the image is gorgeous, and this will no doubt be the Cadillac of 4k professional monitors once it's out. Sony was typically mum about the specifics, so release date and price are currently unknown.

Sony BVM-X300 4k OLED Professional Monitor. I apologize for the terrible picture.

Sony A7s Digital Stills and 4k Video Camera

I'll briefly touch on the Sony A7s, as I'm an A7r owner and have absolutely fallen in love with the camera. For those interested in how these cameras stack up: the Sony Alpha A7, A7r, and A7s are all full-frame, mirrorless, E-mount, and have identical bodies.

The A7: 24.3 MP, 6000x4000 stills, ISO 100-25,600, $1,698 body only.

The A7r: the A7 minus the optical low-pass filter and with higher-resolution 36.4 MP, 7360x4912 stills, ISO 100-25,600, $2,298 body only.

The A7s: 12.2 MP, 4240x2832 stills, ISO 50-409,600, $2,498 body only.

If anything, I think the A7s is indicative of an ever rising trend: small, relatively inexpensive cameras that shoot high-resolution stills and video. I'm guessing that most future cameras above a certain price point will be "4k-apable." That doesn't mean I would choose to shoot motion pictures on a camera like this. When cameras this small are transformed into production mode, they require too many unwieldy and cumbersome accessories, and the shooter and/or camera department just ends up fighting with the equipment. I want to work with gear that facilitates doing your best work, and in my production experience, that is not repurposed photography equipment.

Interestingly, despite this, the A7s seems to be much more a 4k video camera than a 4k raw stills camera. At the sensor level, every pixel in its 4k array is read out without pixel binning, which allows it to output 8-bit 4:2:2 YCbCr uncompressed 3840x2160 video over HDMI in different gammas, including S-Log2. This also allows for greatly improved sensor sensitivity, with an ISO range from 50 to 409,600. The camera has quite a lot of other video-necessary features such as timecode, picture profiles, and, with additional hardware, balanced XLR inputs. The A7s' internal video recording is HD only, which means 4k must be recorded with some sort of off-board HDMI recorder.

As is evident from many wares at this year's show, if you can produce a small on-camera monitor, it might as well record a variety of video signals as well.

Enter the Atomos Shogun. Purpose built for cameras like the Sony A7s and at $1995, a very impressive feature set. 

Hey what camera is that?

Shooting my movie with this setup doesn't sound fun but the Shogun with the A7s will definitely be a great option for filmmakers on micro budgets. 

One cool and unexpected feature of shooting video on these Sony cameras with Sony E-mount lenses (there aren't many choices just yet) is that autofocus works surprisingly well. I've been playing with this on the A7r, shooting 1080 video with the Vario-Tessar 24-70mm zoom. The lens focuses itself in this mode remarkably well, which is great for docu and DIY work. I have to say, though, I'm not terribly impressed with this lens in general.

Sony Vario-Tessar T* FE 24-70mm f/4 ZA OSS Lens

It's quite small and light and the autofocus is good, but f/4 sucks. The bokeh is sharp and jagged instead of smooth and creamy, and it doesn't render the space of the scene as nicely as the Canon L-series zooms, which is too bad. Images from this lens seem more spatially compressed than they should be.

At Sony's booth I played with their upcoming FE 70-200mm f/4 G OSS lens on an A7s connected in 4k to a PVM-X300 via HDMI. I was even less impressed with this lens, and there was quite a bit of CMOS wobble and skew coming out of the A7s. It wasn't the worst I've seen, but it's definitely something to be aware of. This really should come as no surprise for a camera in this class, and even Sony's test footage seems to mindfully conceal it.

Pomfort's LiveGrade Pro v2

As a DIT, I'd be remiss if I didn't mention LiveGrade Pro. 

LiveGrade Pro is a powerful color-management solution, now with GUI options, a Color Temperature slider that affects the RGB gains equally, stills grading, ACES, and support for multiple LUT display hardware. Future features include a Tint slider for the green-magenta axis, nestled between Color Temp and Saturation. Right, Patrick Renner? :)
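Pomfort doesn't publish the exact mapping behind that slider, but a color-temperature control driving the RGB gains can be sketched simply: one common approach pushes the red and blue gains in opposite directions around unity while green stays anchored. Everything here (the function name, the `strength` constant) is a hypothetical illustration, not LiveGrade's actual math:

```python
def temp_to_rgb_gains(temp_slider, strength=0.3):
    """Map a color-temperature slider in [-1.0, 1.0] to (R, G, B) gain multipliers.
    Positive values warm the image (more red, less blue); green stays anchored.
    The 'strength' constant is an arbitrary choice for this sketch."""
    r_gain = 1.0 + strength * temp_slider
    b_gain = 1.0 - strength * temp_slider
    return (r_gain, 1.0, b_gain)

print(temp_to_rgb_gains(0.0))   # (1.0, 1.0, 1.0) -- neutral
print(temp_to_rgb_gains(0.5))   # warmer: red gain up, blue gain down
```

A Tint slider for the green-magenta axis would work the same way, just pivoting the green gain against red and blue together.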

Conclusion 

So what's the next big epiphany? Is it this?

What is Jaunt? Cinematic Virtual Reality.

Jaunt and Oculus Rift were apparently at NAB this year with a demo on the floor. This writer, however, was unable to find it. My time was unfortunately very limited, but other than Jaunt and the invite-only Dolby Vision demo, I feel like I saw what I needed to see. What will be uncovered at next year's show? More of the same? Or a few things that are radically new and fresh?

Luma and Waveforms

© 2009 NegativeSpaces (revised January, 2014)

It’s late. I’m jetlagged, sleepless, and sitting in a hotel room. There's nothing on TV, but I’ve got a Sony EX3, a Leaderscope, and an 11-step grayscale chart. Let’s talk about video luminance, or “Luma” as it’s more correctly known.

I'm sure there's a more elaborate definition out there but this one from Wikipedia sums it up nicely.

“Luma represents the brightness of an image (the “black and white” or achromatic portion of the image). Gamma-Compressed Luma is paired with Chroma to create a video image. Luma represents the achromatic image without any color, while the chroma components represent the color information. Converting R'G'B' sources (i.e. the output of a 3CCD camera) into luma and chroma allows for chroma subsampling, enabling video systems to optimize their performance for the human visual system. Since human vision is more sensitive to luminance detail ("black and white", see Rods vs. Cones) than color detail, video systems can optimize bandwidth for luminance over color."

Luma is measured on a waveform monitor on either a millivolt (mV) or IRE (%) scale. IRE is intuitive because it represents the percentage from dark to light, and for HD video that scale runs from 0 to 109. 0% brightness, 0 IRE, is black. 100% brightness, 100 IRE, is white. 109 IRE is super white (basically some headroom for your highlights). Makes sense to me.
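For Rec.709 HD, luma is a fixed weighted sum of the gamma-corrected R'G'B' components, and one common 8-bit convention puts black at code value 16 and white at 235 on that IRE scale. A sketch:

```python
def rec709_luma(r, g, b):
    """Rec.709 luma (Y') from gamma-corrected R'G'B' values in the 0.0-1.0 range."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def code_to_ire(code):
    """Map an 8-bit video-range code value to IRE: 16 -> 0 IRE, 235 -> 100 IRE."""
    return (code - 16) / (235 - 16) * 100

print(round(rec709_luma(1.0, 1.0, 1.0), 4))  # 1.0 -- white
print(round(code_to_ire(16)))                # 0 IRE, black
print(round(code_to_ire(235)))               # 100 IRE, white
print(round(code_to_ire(254)))               # 109 IRE, "super white" headroom
```

Note how heavily green is weighted: it carries roughly 70% of the luma, which is exactly why chroma subsampling can throw away color detail without the eye noticing much.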

Luma together with Chroma makes up the HD video image, and for engineering purposes each can be manipulated individually with the correct menu settings. Color is controlled by the camera's LINEAR MATRIX, COLOR CORRECTION, RGB GAIN (white balance), and RGB PEDESTAL (black balance) settings, whereas Luma/brightness is controlled with GLOBAL GAIN, PEDESTAL, GAMMA, BLACK GAMMA, and KNEE. Some cameras have slightly different nomenclature or menu features, but these are the basic, universal controls for manipulating video Luma. Use these Luma control tools to shape the image's tonal response: how shadow, mid-tone, and highlight information is rendered.

In my opinion, creating beautiful HD images starts with good lighting. These engineering tools aren’t going to make a poorly exposed and poorly lit image good but they can help make a good image great. Remember that for an 8 bit camera you’re trying to cram all the scene’s brightness information into 256 levels of gray. That’s not much to work with and if you’re dealing with extreme contrast, the menu can help but there are limits to what's practical. 

The Variables:

MASTER PEDESTAL most noticeably affects the bottom of the waveform or the black/dark/shadow portion of the signal. If you think about your waveform as if it’s an actual pedestal or column you’ll see that by increasing or decreasing the Pedestal value, you’re actually lifting or lowering the entire signal. The picture portion of the video signal can’t go below 0 IRE so by lowering the Pedestal or “crushing the blacks” as it’s often referred to, what you’re actually doing is compressing the dark picture information down to pure black, 0 IRE. Some cameras have individual RGB Pedestal controls as well which can be used to affect chrominance in shadows. 
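The crush described above can be simulated directly: shift the signal by the pedestal offset and clip the picture portion at 0 IRE. A minimal sketch (a real camera's behavior is more complex, but the clipping is the point):

```python
def apply_pedestal(ire_values, offset_ire):
    """Shift a list of luma values (in IRE) by a pedestal offset,
    clipping the picture portion of the signal at 0 IRE."""
    return [max(0.0, v + offset_ire) for v in ire_values]

steps = [0, 10, 20, 40, 60, 80, 100]  # a simple grayscale in IRE

# Lowering the Ped: the two darkest steps merge into pure black -- that
# information is gone for good.
print(apply_pedestal(steps, -15.0))  # [0.0, 0.0, 5.0, 25.0, 45.0, 65.0, 85.0]

# Raising it: nothing clips, but black lifts to 15 IRE and contrast flattens.
print(apply_pedestal(steps, 15.0))   # [15.0, 25.0, 35.0, 55.0, 75.0, 95.0, 115.0]
```

Notice that in the first result the 0 and 10 IRE steps have become indistinguishable; no grade can separate them again.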

GAMMA as a television engineering topic is a complex one. But an interesting one! Gamma as it applies to camera menu settings primarily affects the mid-tones, starting at the middle of the waveform scale and either lifting or lowering the information there, which can subtly affect the shadows and highlights as well. In addition to a controllable Gamma level, most video cameras have several preset Gamma options such as HIGH, LOW, HD, SD, CINE, etc. Many of these pre-made Gamma curves are actually a combination of various Knee, Pedestal, and Gamma settings designed to create a specific effect: wide dynamic range, crushed blacks and popping whites, an overall lifted look, and so on. At the end of this article, I'll provide an example of this.
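The defining property of a gamma adjustment is that the endpoints stay pinned while the mid-tones move. A simple power-law sketch on a normalized 0.0-1.0 signal (camera gamma curves are more elaborate, but the shape of the behavior is the same):

```python
def apply_gamma(normalized, gamma):
    """Power-law gamma on a 0.0-1.0 signal: black and white stay pinned,
    mid-tones lift (gamma < 1) or drop (gamma > 1)."""
    return normalized ** gamma

for v in (0.0, 0.5, 1.0):
    print(v, round(apply_gamma(v, 0.45), 3), round(apply_gamma(v, 2.2), 3))
# 0 and 1 are unchanged; middle gray at 0.5 lifts to ~0.732 or drops to ~0.218
```

This is why Gamma is the tool for punching up or flattening contrast "without touching the blacks": the curve passes through 0 and 1 no matter what.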

BLACK GAMMA is for Gamma fine-tuning and controls the area in-between middle gray and black, about 10-40 IRE. Black Gamma is a great way to punch up the blacks without crushing them or to lighten the fill side of a scene.

KNEE is electronic highlight protection and controls the top of the waveform, the bright/white/highlight portion of the signal. Video cameras have traditionally struggled with highlights and clipping, so Knee circuitry was designed to help overcome this inherent problem. Where you set your knee point – in other words, where you tell the camera to begin compressing the white portion of the signal – will greatly affect the quality of your highlights. Knee is not a power window in a color correction suite, though. Knee needs some IRE to work with, and if you've already exceeded the limits of the sensor's bit bucket, adjusting the camera's knee circuitry is going to have little effect. The Knee features in a camera's Paint menu often have more controls than just where the compression begins; you can also inject or remove detail, hue, and saturation in the highlights, etc.
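Knee behavior can be sketched as a piecewise function: the signal passes through untouched below the knee point, and above it the slope is reduced so highlights are compressed toward (rather than slammed into) the clip level. The 0.25 slope here is an arbitrary choice for illustration:

```python
def apply_knee(ire, knee_point, slope=0.25):
    """Above the knee point, highlight values are compressed by 'slope'.
    A smaller slope squeezes more headroom under the clip level."""
    if ire <= knee_point:
        return ire
    return knee_point + (ire - knee_point) * slope

# With the knee at 80 IRE, a 120 IRE highlight is pulled back under clip:
print(apply_knee(60, 80))   # 60 -- untouched below the knee
print(apply_knee(120, 80))  # 90.0 -- 80 + 40 * 0.25
```

Lowering the knee point protects more highlight range at the cost of flatter, grayer whites, which is exactly the trade-off visible in the KNEE 100 / 75 / 50 waveforms below.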

GLOBAL GAIN is an important part of the equation; it's an overall video level that boosts or reduces the entire signal, affecting both Chroma and Luma. It's important to draw the distinction between global gain and RGB gain, which is how camera white balance is controlled.

If you haven't seen them yet, please watch Andy Shipsides’ ENG Essentials on Abel Cinetech's Blog. His video on Gamma Matching is a great resource and is closely related to the information found in this tutorial. That video is more of a how-to on the subject whereas this is intended to illustrate how the camera menu settings specifically affect the grayscale and its accompanying waveform.

normal_chart.jpg

For the purposes of this test, I started by zeroing out all of the Picture Profile settings on the Sony, turned the Knee off, and used the Gamma mode STD1 to set up a basic image for the comparison. The Leader's waveform mode was set to Composite instead of Parade (individual RGB waveforms) to better show the signal in terms of luma only. The lens iris was set to an F2.8/4 split and not adjusted between the various menu settings, so as to illustrate affecting the image in the camera rather than by introducing or taking away light.

I want to see how the menu settings affect the entire picture from 0-109 IRE (note from 2014: this should have been 0-100 IRE!), so first I need to make white white and black black. Initially, with my lens at an F2.8/4 split, white just hit 109 IRE, but true black (the black rectangle between the two grayscales) needed some help, so I brought my Master Pedestal setting down to -14, setting it at 0 IRE. With this combination of iris and Pedestal settings, the middle of the scale crosses at around 60 IRE and we have picture information from 0-109 IRE.

Here is our basic waveform that has picture information from 0-109 IRE. This will be the basis of comparison for the other menu setups. 

1normalw.jpg

And the accompanying properly exposed Grayscale image:

1normal.jpg

What's most important to know when using these tools to affect the image's tonality is how to identify shadows, mid-tones, and highlights on a waveform monitor.

1normalw_ident.jpg

Let's have a look at what happens when we start altering our shadows by setting the Pedestal. If you’ve been following this blog, you’ll know my thoughts on the mistake of arbitrarily crushing blacks. Here’s why:

PEDESTAL -50

2ped-50.jpg
2ped-50w.jpg

As you can see, the information that would have been residing between 0 and 20% has been crushed down to 0 IRE – in other words, 0% picture information. No amount of post-production wizardry is going to get that information back without introducing lots and lots of noise. So if you like crushed blacks, make sure you like them enough to live with them. Some shots can definitely benefit from strong, inky blacks; just be aware of what you’re doing.

Here’s what happens when we lift the Master Pedestal.

PEDESTAL +50

2pedp50.jpg
2pedp50w.jpg

As I mentioned, Pedestal will raise or lower the entire video signal. While lowering it is a way to introduce contrast, raising it will take contrast away. Because lowering it crushes dark picture information down to black, it may seem like raising it would crush bright picture information into white. That’s not the case: as the Pedestal is raised, the signal is compressed, black gets lighter, and white gets grayer. Some DPs will shoot with the Ped always a little lifted just so they can hang on to as much shadow and highlight detail as possible. This is great if you know there's going to be a DI or some grading done later. If that isn’t the case, you’ve really got to get it right in the camera, and the flat look of lifted pedestal isn't the most attractive.

With the Gamma setting, we have some control over our mid-tones. These controls are pretty subtle but can be a great way to quickly punch up or reduce contrast without touching the blacks. This is more of a personal preference, but I routinely drop the gamma a little to get a bolder, richer picture.

GAMMA +99

3gammap99.jpg
3gammap99w.jpg

GAMMA -99

4gamma-99.jpg
4gamma-99w.jpg

As I mentioned above, Black Gamma is a great tool for subtly fine-tuning the dark portion of the signal and can punch up the blacks without crushing them.

BLACK GAMMA +99

5bgammap99.jpg
5bgammap99w.jpg

BLACK GAMMA -99

6bgamma-99.jpg
6bgamma-99w.jpg

And now we can affect the top portion of the signal, or the whites, by altering our Knee Point. Knee is a little deceptive because what it does is tell the camera where in the signal to start clamping down highlight information. For example, by lowering the value to 80, you tell the camera to start compressing picture information from 80 IRE up. It works within the limits of the sensor, though, and information so bright that it exceeds those limits won't be noticeably affected. By turning the Knee off, you are doing nothing electronically to protect your highlights. As I mentioned above, Knee needs some IRE to work with, and if you’ve already exceeded the limits of the sensor’s bit bucket, adjusting the camera’s Knee circuitry is going to have little or no effect. Some cameras also have a White Clip setting, which will not record any part of the signal past the value you specify. The Sony EX1 does not have a White Clip setting.

Here is the KNEE set to 100 which starts the compression at 100 IRE. As you can see, the highlights start to roll off at that point on the scale.

7knee100.jpg
7knee100w.jpg

Here is the KNEE set to 75

8knee75.jpg
8knee75w.jpg

And the lowest value on the EX1, KNEE 50

9knee50.jpg
9knee50w.jpg

Highlight rendering can be subtly fine-tuned further with the Knee Slope control, which shapes the Knee curve and affects how quickly the highlights roll off to pure white. Very subtle!

KNEE 100, KNEE SLOPE +99

10knee100sp99.jpg
10knee100sp99w.jpg

KNEE 100, KNEE SLOPE -99

11knee100s-99.jpg
11knee100s-99w.jpg

CAMERA MATCHING:

We can use these tools to match the gamma curves of different cameras to one another. Remember, your camera's pre-made Gamma options – STD, CINE, etc. – are really combinations of various Luma control settings. With skillful use of these tools, you can recreate any of these effects pretty closely or even design your own custom gamma curve.
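Matching by eye on superimposed waveforms works, but a numeric error measure makes "pretty close" concrete. A sketch that scores how closely two measured grayscale curves agree (the IRE readings below are hypothetical, not from the EX1 test):

```python
def curve_mismatch(curve_a, curve_b):
    """Root-mean-square difference (in IRE) between two measured grayscale
    curves, sampled at the same chart steps."""
    assert len(curve_a) == len(curve_b)
    n = len(curve_a)
    return (sum((a - b) ** 2 for a, b in zip(curve_a, curve_b)) / n) ** 0.5

cine2   = [0, 8, 18, 35, 52, 64, 72]   # hypothetical IRE readings per chart step
matched = [0, 9, 17, 36, 51, 65, 71]   # the same steps after tweaking Ped/Gamma/Knee

print(round(curve_mismatch(cine2, matched), 2))  # 0.93 IRE -- a close match
```

An RMS error of about one IRE is well under what "a little play in the iris" can cover, which is roughly the standard the matching exercise below aims for.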

In this example, I set the camera to CINE2, the most aggressive of the pre-made gamma options on the EX1. I left the lens at an F2.8/4 split and captured the waveform. You'll notice that because this curve clamps the signal down so much, it's quite a bit darker than STD1.

Here it is:

match1.jpg
match1w.jpg

Next I zeroed the settings out and reset the camera to Gamma STD1. Because CINE2 uses such aggressive compression, I had to close down half a stop on STD1 to bring it within range. Using the menu, I adjusted the following settings to arrive at a fairly close match to CINE2 – not perfect, but pretty close.

PEDESTAL -11

BLACK GAMMA +12

KNEE 59

KNEE SLOPE -37

GAMMA -99

Here are the two curves superimposed over one another. Pretty close. With a little play in the iris, these pictures would be fairly well matched.

match_comp.jpg

And for kicks, to better illustrate how aggressive these pre-made gamma curves can be, here are the Cine curves from the Sony XDCAM-EX camcorder series (Iris is the same for each curve):

Cine 1

cine1_comp.jpg

Cine 2

cine2_comp.jpg

Cine 3

cine3_comp.jpg

Cine 4

cine4_comp.jpg

Please support this blog by leaving comments and feedback. It's only through user support and feedback that this content can be fine tuned so I always appreciate hearing from you. 

Painting HD Cameras - Skin Tones

© 2009 NegativeSpaces (revised January, 2014)

In my experience color-correcting video cameras in the field, nine times out of ten I’m trying to resolve some sort of skin-tone issue: taking green out, bringing overly magenta skin back within a normal range, or sometimes just injecting a little warmth and saturation. Knowing how to correctly use a video camera’s User Matrix, Color Correction, and Tonal Control menus is the key to working through these inevitable problems. Building on my previous article on in-camera color correction for HDTV, this article will specifically address how to use the various matrix attributes to affect skin tones.

This article builds off what was established in Painting HD Cameras - Basic Colorimetry. 

Technical Notes:

The images used in this article were created with a Panasonic HDX900, and the stills and vectorscope information were captured from a Leader LV 5330 Multi-SDI Monitor. Because a Panasonic camera was used, the workflow presented and the menu features explained are those found on Panasonic cameras. The feature set on Sony cameras is similar enough, though, that if you know one system, you should be able to apply the same concepts to the other. The chip chart used was a DSC Labs CamBelles chart. These charts are the standard for video engineering and camera alignment; because the colors and values are so uniformly printed and tested, they can be measured electronically with repeatable results. Correct use of DSC Labs equipment can be used not only to calibrate and match equipment but also to paint custom looks in the controlled environment of your studio.

On naming conventions: 

In most Panasonic cameras, the Linear Matrix is referred to as User Matrix and the Multi Matrix is referred to as Color Correction. In Sony cameras, the Linear Matrix is referred to as Matrix Linear and the Multi Matrix is referred to as Matrix (Multi). As this is a Panasonic-oriented article, from here on out I'll be using the Panasonic nomenclature.

Part 1: Overview

First, to rehash: there are six attributes that affect a video camera’s Linear Matrix: B-G, B-R, G-B, G-R, R-B, R-G. Those are read “Blue into Green,” “Blue into Red,” and so on. Additionally, there are twelve Color Correction attributes we can modify: R, Mg, B, Cy, G, Yl, Yl-R, R-Mg, Mg-B, B-Cy, Cy-G, and G-Yl. For an in-depth account of how these attributes work by pushing and pulling colors around the vectorscope, please refer to the previous tutorial. Using the handy DSC Labs Chroma Du Monde chart with its 4 "generic skin tone" swatches, let's have a look at our camera's "out of the box" default colorimetry:

normal_chart_w_skintones.jpg
normal_vector_skin.jpg
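To make the "Blue into Green" idea concrete, here is a small conceptual sketch of how a linear matrix mixes channels. This is not Panasonic's actual firmware math — the coefficients are illustrative floats, not the camera's -63 to +63 menu units — but the geometry is the same: each attribute bleeds one channel into another while the diagonal compensates so greys stay neutral.

```python
# Conceptual sketch of a camera linear (user) matrix. Coefficients read
# "source into destination", e.g. b_g = Blue into Green. Illustrative only;
# real cameras map menu units -63..+63 onto internal coefficients.

def apply_user_matrix(rgb, b_g=0.0, b_r=0.0, g_b=0.0, g_r=0.0, r_b=0.0, r_g=0.0):
    """Each output row's diagonal is reduced by that row's mix amounts,
    so neutral greys (R == G == B) pass through unchanged -- matrixing
    never shifts your white balance."""
    r, g, b = rgb
    r_out = (1 - g_r - b_r) * r + g_r * g + b_r * b
    g_out = r_g * r + (1 - r_g - b_g) * g + b_g * b
    b_out = r_b * r + g_b * g + (1 - r_b - g_b) * b
    return (r_out, g_out, b_out)

# A hypothetical light-skin patch, warmed by pulling blue OUT of red:
skin = (0.80, 0.60, 0.50)
warmer = apply_user_matrix(skin, b_r=-0.10)   # red rises, green/blue untouched
```

Note that a grey input comes back unchanged no matter how hard you push the coefficients, which is exactly why matrix painting is safe to do after white balancing.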

Interestingly enough, virtually all human skin, regardless of its hue or saturation, resides somewhere within or near this red circle, which for simplicity we'll call the "Skin Tone Region". The region sits along the I line on the vectorscope, above the Q line (see the intersecting lines on the graphic below). Where the Q line crosses the I line, skin tone saturation is at zero; the closer the skin tone information is to the boundary of the circle, the greater its saturation. Smart camera software such as Skin Tone Detail circuitry knows to look within the Skin Tone Region and is thus able to isolate the information there to make independent adjustments. This is very helpful because it becomes easier to predict how the values are going to move around on the vectorscope as adjustments are made to the camera.

IQ-AXIS.jpg
skintone_region2.jpg
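The I and Q lines come from the NTSC YIQ color space, and the standard YIQ weights are enough to see why the skin tone line works: for plausible skin values, Q lands near zero while I stays positive. A quick sketch (the skin patch RGB here is a hypothetical example value, not from the CamBelles chart):

```python
# Standard NTSC YIQ chroma transform. The "skin tone line" is simply the
# positive I axis, so plausible skin lands within a few degrees of it.
import math

def rgb_to_iq(r, g, b):
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return i, q

def hue_angle_deg(r, g, b):
    """Angle off the I axis on the vectorscope; 0 degrees = dead on the skin line."""
    i, q = rgb_to_iq(r, g, b)
    return math.degrees(math.atan2(q, i))

skin = (0.80, 0.60, 0.50)    # hypothetical light-skin patch
i, q = rgb_to_iq(*skin)      # q comes out tiny compared with i
```

A neutral grey gives I = Q = 0 (the crossing point mentioned above), and the distance from that crossing is the saturation.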

Now before we start playing, let's get a better idea of how these variables will affect actual human skin by using the DSC Labs CamBelles chart. Obviously live models would be better, but for what it is, this chart is incredibly precise and I've used it to paint looks in the studio that have worked perfectly well in the field.

The lovely ladies of DSC:

1normal.jpg

There is a good variety of skin tones here and the light in the scene is modeled enough that you can examine a good range of values. Also the fact that they're wearing bright clothes and are on a blue background helps to isolate the skin tones on the vectorscope.

Here's what they look like on the Vectorscope:

isolated_skintone.jpg

This isn't a tutorial on tonality, but part of getting good color is getting a good exposure. This is what my properly exposed and properly white balanced CamBelles look like on the waveform.

1normalwfm.jpg

And if you have False Color on your monitor, you can use it to confirm your exposure:

1normalfc.jpg

Usually you want to keep light skin tones in the green-to-yellow zone and darker skin tones in the green-to-blue zone. Orange is 80%, which is where skin starts to break up, so you definitely don't want your key light hitting that hard.
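The false color logic above can be sketched as a simple lookup. The band thresholds vary from one monitor maker to another, so the numbers below are illustrative assumptions, not any manufacturer's actual map — only the 80% break-up point is taken from the text:

```python
# A sketch of a false-colour exposure lookup. Band boundaries are
# ILLUSTRATIVE -- real monitors (TVLogic, SmallHD, etc.) each use their
# own thresholds -- but the reading is the same: light skin keyed in the
# green-to-yellow bands, darker skin green-to-blue, nothing important in orange.

def false_color_zone(ire):
    if ire >= 100: return "red"     # clipped
    if ire >= 80:  return "orange"  # skin starts to break up here
    if ire >= 55:  return "yellow"  # bright side of light skin
    if ire >= 40:  return "green"   # 18% grey / light-skin key side
    if ire >= 10:  return "blue"    # darker skin, shadow detail
    return "black"                  # crushed
```

So a light-skin key reading around 50 IRE shows green, and anything hot enough to show orange is already past the 80% point where skin detail falls apart.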

Skin tones can also be affected globally with Master Saturation Controls. Increased Saturation on the left and decreased Saturation on the right:

sat_comp.jpg

Part 2: User Matrix menu and skin tones

Typically you wouldn't use matrix adjustment to specifically affect skin tones as these are more global adjustments but it's good to see what the effect is. You're also hardly ever going to only use one of these adjustments. When creating a custom look, you'll most likely be pushing values around in all six menu options.

For example, let’s look at a side-by-side of the CamBelles when you put the B-R (Blue into Red) attribute at its maximum value, +63, on the left and its minimum value, -63, on the right:

b-r_example.jpg

As you can see, you’re never only affecting the skin tones. In your quest to render the perfect skin you’re also affecting plenty of other colors. It’s very easy to get caught in an endless cycle of color correction where you fix one thing only to create a new problem with another color. Only through trial and error and understanding the basic principles behind how in-camera color correction works will you be able to quickly execute the best solution.

Now let's have a look at what happens to our skin tones when we adjust each of the User Matrix variables:

B-G, BLUE INTO GREEN: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

blue-green.jpg

B-G +63 (increase in value)

B-G –63 (decrease in value)

b-g-63.jpg

B-R, BLUE INTO RED: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

blue-red.jpg

B-R +63 (increase in value)

b-rp63.jpg

B-R –63 (decrease in value)

b-r-63.jpg

G-B, GREEN INTO BLUE: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

green-blue.jpg

G-B +63 (increase in value)

g-bp63.jpg

G-B –63 (decrease in value)

g-b-63.jpg

G-R, GREEN INTO RED: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

green-red.jpg

 

G-R +63 (increase in value)

g-rp63.jpg

G-R –63 (decrease in value)

g-r-63.jpg

R-B, RED INTO BLUE: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

red-blue.jpg

R-B +63 (increase in value)

r-bp63.jpg

R-B –63 (decrease in value)

r-b-63.jpg

R-G, RED INTO GREEN: 

On the left - Positive Value, In the middle - Default Value, On the right - Negative Value

red-green.jpg

R-G +63 (increase in value)

r-gp63.jpg

R-G –63 (decrease in value)

r-g-63.jpg

Part 3: Color Correction menu and skin tones

Unfortunately I don't have CamBelles examples for working with the Color Correction menus. The attributes you'll be working with the most in regard to skin tones are the following three video colors: Yellow-Red (Yl-R), Red (R), and Yellow (Yl).

In the Color Correction menu set, we can isolate and modify the following twelve individual vectors: six primary video colors - Red (R), Yellow (Yl), Green (G), Cyan (Cy), Blue (B), and Magenta (Mg) and the six colors in between the primaries - Red-Magenta (R-Mg), Magenta-Blue (Mg-B), Blue-Cyan (B-Cy), Cyan-Green (Cy-G), Green-Yellow (G-Yl), and Yellow-Red (Yl-R). 

vectorscope2.jpg

As exemplified in the above graphic, the colors in and around these areas will be affected by their corresponding adjustments. To modify the Hue or Saturation of Red, use the "R" Color Correction attribute, for the colors in-between Yellow and Red, use "Yl-R", etc. 

yl-r.jpg
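Picking the right attribute amounts to asking which of the twelve vectors a color sits closest to on the scope. As a sketch, the primaries below use the familiar approximate vectorscope graticule angles, and each in-between attribute is modeled as sitting midway between its two primaries — an assumption for illustration, not how any camera actually defines its sectors:

```python
# Given a hue angle on the vectorscope, pick the nearest Color Correction
# attribute. Primary angles are the approximate standard graticule targets
# (degrees, counter-clockwise); in-betweens are ASSUMED to sit at the
# circular midpoint of their two primaries.

PRIMARIES = {"R": 103.4, "Yl": 167.1, "G": 240.7, "Cy": 283.4, "B": 347.1, "Mg": 60.7}

def _midpoint(a, b):
    # circular midpoint along the shorter arc between two angles
    diff = (b - a + 180) % 360 - 180
    return (a + diff / 2) % 360

VECTORS = dict(PRIMARIES)
for pair in ["Yl-R", "R-Mg", "Mg-B", "B-Cy", "Cy-G", "G-Yl"]:
    p, q = pair.split("-")
    VECTORS[pair] = _midpoint(PRIMARIES[p], PRIMARIES[q])

def nearest_vector(angle_deg):
    def dist(a, b):
        return abs((a - b + 180) % 360 - 180)
    return min(VECTORS, key=lambda name: dist(VECTORS[name], angle_deg))
```

A color at roughly 140 degrees, between red and yellow, comes back as Yl-R — which is exactly the skin tone neighborhood discussed above.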

These Color Correction attributes are modified with a Phase and a Saturation control. A negative Phase value (-) will move the color to the left on the vectorscope; a positive Phase value (+) will move it to the right. A negative (-) Saturation value will move the color closer to the center of the vectorscope, decreasing saturation, and a positive (+) value will move it closer to the edge of the circle, increasing saturation. By altering the Phase of an individual color you are moving it out of alignment with the other colors and reducing the number of shades the camera can reproduce. Using these controls you can work on individual colors (such as skin tones) and subtly alter their hue and saturation, but you will still affect any other color that contains the color you are modifying. The effect is far more subtle than the Linear Matrix adjustments, but it is often necessary to arrive at a very specific hue or saturation. Color correction in post production allows for a much finer degree of control, so in some cases the job is best left for post.

phase_sat.jpg
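Geometrically, a Phase/Saturation trim is just a polar-coordinate move on the vectorscope: phase rotates the color's chroma point, saturation scales its distance from center. A minimal sketch, assuming illustrative degree and percent units (the camera's -63 to +63 steps map to real amounts only inside the firmware, and the rotation sign convention varies by scope orientation):

```python
# Sketch of a Phase / Saturation trim as a polar move on the vectorscope.
# (x, y) is a chroma point, e.g. (B-Y, R-Y). Units are illustrative:
# phase_deg in degrees, sat_pct as a percentage change in saturation.
import math

def trim_vector(x, y, phase_deg=0.0, sat_pct=0.0):
    radius = math.hypot(x, y) * (1 + sat_pct / 100)   # saturation = distance from centre
    angle = math.atan2(y, x) - math.radians(phase_deg)  # phase = rotation (sign is a convention)
    return radius * math.cos(angle), radius * math.sin(angle)
```

With both controls at zero the point is untouched; -100% saturation collapses it to the center of the scope, which is the zero-saturation crossing described earlier.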

As mentioned in the previous article, you're very rarely going to work with only one attribute at a time. It's really understanding how they're all used together that's the key to good camera painting. Every task is different and there is no "one size fits all" approach. However, I will say that Yl-R in Color Correction is often where I start when trying to inject some warmth and life into dull-looking skin.

Please support this blog by leaving comments and feedback. It's really only through user support and feedback that this content can be fine tuned so I always appreciate hearing from you.