The Journal

 
The current issue of the SMPTE Motion Imaging Journal is now available in the Digital Library.

Exclusive online peer-reviewed articles are available only in the Digital Library!
June 2016

Hot Button Discussion
Acquisition HDR
By Michael Goldman 

As the broadcast industry pivots into 4K UHD broadcasting, much of the chatter revolves around how best to transmit and display 4K signals so that high dynamic range capabilities are maximized on the first generation of 4K displays, as discussed in the May issue of Newswatch. After all, HDR is all about increasing the range of brightness in images in order to boost contrast between the whitest and the blackest elements. However, the front end of the equation is also evolving: acquiring the image with the greatest dynamic range possible in the first place, and then maintaining it all the way down the chain. Indeed, image capture for various types of broadcast applications, particularly live broadcasting, is being forced to factor in numerous changes and new considerations with the advent of UHD. Among them: how best to capture, utilize artistically, and then maximize the effect of greater dynamic range, and how best to combine higher dynamic range capabilities with other UHD improvements, including wider color gamut, higher frame rates, new scanning formats, and other considerations.
 
In the view of Peter Centen, vice president of research and development for Cameras at Grass Valley, a Belden brand, this paradigm illustrates a fundamental shift in how media technology development operates.
 
“In the early 1990s, we had our first HDTV [broadcast] camera, but the whole world was standard definition, on NTSC and PAL,” Centen relates. “We were all discussing 4 x 3 versus 16 x 9, and the world did not change overnight. It took something like 10 to 15 years to at least get some takeoff in transmission of HD in the wide aspect ratio of 16 x 9.
“Now, all of a sudden, we have 4K, mainly because display manufacturers saw the market going down for HD displays, and the easiest step forward for them was to copy and paste more pixels, because if you do four quadrants of HD, then you have UHD. The other option was to put higher dynamic range (HDR) [into HD displays], but that wasn’t as easily done by [display] manufacturers because that meant they would have to take measures to get more light out of displays to get better contrast. In my opinion, in terms of perception of sharpness for the human eye, if they had done so, people would have perceived contrast in such displays as resembling what we now call 4K, even if it would have still been HD and not 4K.
 
“And now, you see people wanting to make such changes within a year, and that is difficult to keep pace with. That fits into the society we are in at the moment—things are more personal, and people want everything more immediate. We have gone from a slow-changing technical world to a fast-changing technical world.” 

In the case of HDR in broadcast camera imaging systems specifically, however, the paradigm, in a sense, has been turned on its head, Centen suggests, because, for a long time, high-end HD broadcast camera systems from major manufacturers have been able “to support much higher dynamic range, but [HD] displays at the time were not able to cope with it,” he relates. “In the past, screens had a limited optical output—first to about 100 nits, then something like 200 or 400 nits, but the maximum brightness was always limited. Because of that, viewers looked at the displays with limited brightness, and were pleased by the images, because [content creators] adjusted their cameras in such a fashion, or in post by [colorists], to make them pleasing on those displays, making sure blacks, highlights, and mid-tones were correct.”
Now, new UHD displays are capable of much more in the dynamic range arena, Centen elaborates. “[Content creators] understand there is a lot of headroom with newer 2/3-in. CMOS imagers, which, unlike CCD sensors, can go beyond 1,000 nits. A CMOS imager can go to a higher pixel count and higher frame rate, and still obtain good signal-to-noise ratios and good sensitivities for pleasing images. Since we now have [4K] displays that support at least 1,000 nits, sometimes 4,000 nits, and maybe some are going to even 10,000 nits, we can better support those displays and do not have to limit the capabilities of our imagers. So, manufacturers of imagers are taking pains to make sure their cameras can capture dynamic range that can fit the new displays. For live broadcasts, especially, that is important, because you simply capture the image and put it on the air, if you have a way to transmit it. In a cinema application, of course, they record all the bit depth, and then, afterward, go to post and prepare the material in whatever fashion they want to have it—they always have a storage stage and a delay stage in between. But for live broadcast applications, we have to address in real time how colors and brightness are generated, what the color space should look like, and how that color should be mapped to [UHD displays] in our imagers.”
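To put those brightness figures in rough perspective (an illustrative calculation, not from the article): each doubling of peak luminance adds one stop of highlight headroom over a 100-nit SDR reference white.

```python
import math

# Illustrative only: stops of headroom above a 100-nit SDR reference white.
SDR_REFERENCE_NITS = 100.0

for peak_nits in (100, 400, 1000, 4000, 10000):
    stops = math.log2(peak_nits / SDR_REFERENCE_NITS)
    print(f"{peak_nits:>6} nits = {stops:+.1f} stops over SDR white")
```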

Camera manufacturers have therefore evolved sensors and corresponding software to make sure their systems can address not only how colors can be generated in standard Rec. 709 video color space, with what Centen calls “the proper gamma defined, nonlinear curves, which compensate for the non-linear curves in the display, with the added transfer curve or gamma knee point imposed on the higher exposure level to help cope with highlights,” but also how they can accomplish the same thing in the wider standard color space for UHD imagery known as Rec. 2020.
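For reference, the Rec. 709 OETF Centen alludes to is simple to state; the highlight knee, by contrast, is vendor-specific, so the knee below is a hypothetical sketch rather than any manufacturer's actual circuit.

```python
def rec709_oetf(light: float) -> float:
    """Rec. 709 opto-electronic transfer function.
    `light` is linear scene light with reference white at 1.0;
    returns the nonlinear signal value."""
    if light < 0.018:
        return 4.5 * light
    return 1.099 * light ** 0.45 - 0.099

def with_knee(light: float, knee: float = 0.85, slope: float = 0.15) -> float:
    """Hypothetical highlight knee: signal above `knee` is compressed with a
    gentler slope so overbright scene light rolls off instead of clipping."""
    v = rec709_oetf(light)
    return v if v <= knee else knee + (v - knee) * slope
```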
 
“Doing Rec. 2020 compared to Rec. 709 in the camera basically means that you get brighter colors,” he says. “So it is more saturated and pleasing for people to watch, and offers more possibilities for artificial lighting in a studio environment without generating false colors. A note of caution, though: using artificial lighting outside of Rec. 709 and within Rec. 2020 will pose problems when some cameras are [operating in] Rec. 709 and others in Rec. 2020. There is currently an EBU working group chaired by the BBC on this topic.”
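The gamut relationship itself is a fixed linear-light matrix; a minimal sketch using the Rec. 709-to-Rec. 2020 conversion matrix published in ITU-R BT.2087:

```python
# Linear-light Rec. 709 RGB -> Rec. 2020 RGB (coefficients from ITU-R BT.2087).
# The conversion must be done on linear light (OETF removed), not on the
# gamma-encoded signal.
M_709_TO_2020 = (
    (0.6274, 0.3293, 0.0433),
    (0.0691, 0.9195, 0.0114),
    (0.0164, 0.0880, 0.8956),
)

def rec709_to_rec2020(rgb):
    return tuple(sum(m * c for m, c in zip(row, rgb)) for row in M_709_TO_2020)

# A pure Rec. 709 red sits well inside the wider 2020 gamut:
print(rec709_to_rec2020((1.0, 0.0, 0.0)))  # (0.6274, 0.0691, 0.0164)
```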
Centen points out that, in recent years, new transfer functions have been developed to allow systems to convert optical data to a digital representation of a higher dynamic range of brightness in a way that the human eye and brain can understand, with as little data loss as possible. Dolby, for example, developed its Perceptual Quantizer, or PQ curve, which has since been standardized as SMPTE ST 2084, the electro-optical transfer function for high dynamic range displays; it enables a wider range of brightness values than traditional gamma curves while minimizing the impact of contouring on modern 10-bit or 12-bit displays.
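For reference, the ST 2084 EOTF is compact enough to state directly; a minimal sketch with the constants from the standard:

```python
# SMPTE ST 2084 (PQ) EOTF: decodes a nonlinear signal N in [0, 1] to
# absolute luminance in cd/m^2 (nits). Constants are the standard's
# exact rationals.
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(n: float) -> float:
    p = n ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

print(pq_eotf(1.0))  # 10000.0: full code is 10,000 nits
print(pq_eotf(0.5))  # ~92 nits: half the code range covers only darker tones
```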
 
Then, the BBC and NHK jointly developed another methodology known as Hybrid Log-Gamma (HLG), which has been standardized as ARIB STD-B67 by the Association of Radio Industries and Businesses (ARIB). This approach assumes a relative brightness value, based on anticipated dynamic range and a peak white reference value, and can be used as an image-capture curve.
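The ARIB STD-B67 OETF itself is a two-piece curve: square root below one-twelfth of peak scene light (which is what keeps it roughly compatible with SDR gammas), logarithmic above. A minimal sketch:

```python
import math

# ARIB STD-B67 (HLG) OETF: relative scene light E in [0, 1] -> signal in [0, 1].
A = 0.17883277
B = 1 - 4 * A                   # 0.28466892
C = 0.5 - A * math.log(4 * A)   # ~0.55991073

def hlg_oetf(e: float) -> float:
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C

print(hlg_oetf(1.0 / 12.0))  # 0.5: half signal at 1/12 of peak scene light
print(hlg_oetf(1.0))         # ~1.0 at peak
```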
 
Centen calls this method “a bridge between how the gamma curve was defined in broadcast and how the knee was defined, and how it was felt the curve needed to be adjusted to enable standard dynamic range on the one hand, and high dynamic range on higher dynamic range displays on the other hand. But that curve is not optimized for a minimum number of code values to allow for maximum contrast range,” meaning that function covers a somewhat narrower dynamic range than ST 2084.
Both approaches are now jointly being developed by Dolby, the BBC, and NHK into an HDR standard proposal for the ITU.
 
The point, however, for camera manufacturers is that their broadcast imaging systems ideally have to support all these approaches so that, creatively, users can choose whatever curves they want at the time of acquisition, and then “be able to tell all the system elements in the broadcasting chain what curve they used, and how you can make color corrections for that,” Centen says.
 
Of course, given the complexities of technology balanced with the creativity of content makers, there is more to it than that because, increasingly in the UHD era, filmmakers and broadcasters want to try more things creatively than the standard interpretations were designed to accommodate, Centen suggests. Therefore, “there will not always be a one-to-one correspondence,” he says, between the image captured and how content creators will want to map its brightness and other characteristics to a 4K monitor.
 
“Artistically, or from a viewer experience, it might be better to not have a full linear one-to-one correspondence between light in and light out,” he explains. “So, no matter whether you [set the camera] for PQ or HLG, you might still need an additional curve in your camera to permit artistic intent. Cameras need additional optical transfer curves so that you can further address artistic intent [on the front end] in terms of these characteristics. The other reason for having an additional curve in the camera is that when you go to blacks, with PQ and HLG on a small signal, a bit of noise can get amplified a lot. The differential gain of those special curves for high dynamic range can result in a bigger differential gain [than desired], and you must have the ability to modify that a bit.”
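Centen's point about differential gain can be made concrete with a quick numerical comparison (my illustration, not his): near black, the PQ encoding curve is several times steeper than a Rec. 709-style curve referenced to a 100-nit display, so the same sensor noise swings correspondingly more code values.

```python
# Compare small-signal gain (slope, in signal units per nit) near black.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """PQ inverse EOTF: absolute luminance in nits -> signal in [0, 1]."""
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def rec709_encode(nits: float) -> float:
    """Rec. 709 linear segment near black, referenced to a 100-nit display."""
    return 4.5 * (nits / 100.0)

def slope(f, x, h=1e-5):
    return (f(x + h) - f(x)) / h

# At 0.1 nits the PQ slope is roughly 0.26 signal/nit vs. 0.045 for Rec. 709,
# about 6x steeper, which is why a camera may need to soften the blacks.
print(slope(pq_encode, 0.1), slope(rec709_encode, 0.1))
```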
A modern broadcast camera, however, not only has to capture and support higher dynamic range and these other improved image characteristics; it also has to fit efficiently into what Centen calls “a new transmission chain” for delivering UHD broadcast imagery with better dynamic range after capture, all the way down the line, so that what was captured gets transmitted to the viewer’s 4K display correctly, with as few errors as possible, representing what the content creator intended. Therefore, another important feature he says manufacturers need to take into account is accommodating the new color space at the foundation of the Dolby Vision UHD standard. That format is called ITP-PQ (Intensity, Tritanope, Protanope). ITP-PQ is essentially a group of new color difference equations, different from traditional YCbCr color space, built upon LMS, the long-, medium-, and short-wavelength cone responses of the retina of the human eye.
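As a sketch of how such a format hangs together (using the ICtCp coefficients Dolby has published; the production Dolby Vision pipeline may differ in detail): linear Rec. 2020 RGB is first mixed into LMS cone responses, each channel is PQ-encoded, and a second matrix forms the intensity and two color-difference axes.

```python
# Sketch of an ITP-style encode: linear Rec. 2020 RGB -> LMS -> PQ -> I, Ct, Cp.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(x: float) -> float:
    """PQ inverse EOTF on normalized linear light (1.0 = 10,000 nits)."""
    y = max(x, 0.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

RGB_TO_LMS = (                       # from linear Rec. 2020 RGB
    (1688 / 4096, 2146 / 4096, 262 / 4096),
    (683 / 4096, 2951 / 4096, 462 / 4096),
    (99 / 4096, 309 / 4096, 3688 / 4096),
)
LMS_TO_ITP = (                       # applied to the PQ-encoded LMS triplet
    (0.5, 0.5, 0.0),
    (6610 / 4096, -13613 / 4096, 7003 / 4096),
    (17933 / 4096, -17390 / 4096, -543 / 4096),
)

def mat3(m, v):
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

def rgb2020_to_itp(rgb_linear):
    lms_pq = tuple(pq_encode(c) for c in mat3(RGB_TO_LMS, rgb_linear))
    return mat3(LMS_TO_ITP, lms_pq)
```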
 
Centen adds that other approaches are on the landscape for color space transformations. In particular, he points to some of the conclusions regarding how to get optimum color performance when combined with HDR, as summarized in a presentation at IBC 2014 titled “Deploying Wide Color Gamut and High Dynamic Range in HD and UHD.”
 
The general idea, Centen says, is to reduce the number of errors present in the transmission signal once imagery is transmitted from the camera into the broadcast chain, including errors within high dynamic range curves.

“The output is based on the output signal of your imager, and that signal then goes into the transmission chain,” he says. “That means there is then a kind of optical amplification in the signal, and with the luminance signal and the color difference signal, YCbCr, which is the standard definition for standard dynamic range, that can introduce small errors into the transmission. These are errors you would not normally see in the past because typical displays only went up to 100 nits. But if you go to larger amplitudes [with 4K displays], then you will find those small errors are intrinsically visible, because the system has optically multiplied them. So, to address that, we added the ability in the camera to switch between normal YCbCr that relates to Rec. 709 and normal YCbCr that relates to Rec. 2020. It’s a new color space where you choose the [LMS setting] in the camera, so it will automatically select ITP, and with that, give you a reduction in errors in the transmission system so that, in the end, you won’t have the artifacts visible that you would have had if you just stuck with Rec. 2020 and YCbCr with a regular high dynamic range curve. It’s about changing the matrices for colorimetry and transmission.”
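The colorimetry switch Centen describes comes down to which matrix the camera applies; for reference, the standard non-constant-luminance Y'CbCr construction differs between Rec. 709 and Rec. 2020 only in its luma coefficients:

```python
# Non-constant-luminance Y'CbCr from gamma-encoded R'G'B'.
LUMA_COEFFS = {
    "rec709": (0.2126, 0.7152, 0.0722),
    "rec2020": (0.2627, 0.6780, 0.0593),
}

def ycbcr(rgb_prime, standard="rec2020"):
    kr, kg, kb = LUMA_COEFFS[standard]
    r, g, b = rgb_prime
    y = kr * r + kg * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return y, cb, cr
```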
Centen adds that the issue of the transmission chain is particularly important today because of the looming arrival of the new ATSC 3.0 broadcast standard. The industry is currently testing various aspects of data transmission during the course of ongoing trials for the new standard, and one of the key issues is how well 4K/HDR content can be adjusted and travel through the ATSC 3.0 transmission chain, according to Centen. Grass Valley’s new 4K camera, the LDX 86, among others, he says, was slated at press time to be utilized in official ATSC 3.0 tests for that purpose.
 
Centen also points out there are a host of other options available or in the works to help improve the ability to increase dynamic range visible in images viewed on 4K displays. Just as there are various software techniques to upscale content to 4K, he says “there are technologies available to upscale from SDR to HDR. Standard dynamic range material can be converted, and vice versa. There are several companies looking into this, and anything is possible in that regard.”
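The vendors' enhancement techniques are proprietary, but the baseline form of an SDR-to-HDR conversion is just a container re-mapping, sketched below under assumed values (a gamma-2.4 SDR decode, SDR white placed at 200 nits); real converters additionally expand highlights and manage color.

```python
# Naive SDR -> PQ/HDR container re-mapping (illustrative assumptions only).
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
SDR_WHITE_NITS = 200.0  # assumed placement of SDR reference white

def pq_encode(nits: float) -> float:
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

def sdr_to_hdr(v: float) -> float:
    """SDR signal in [0, 1] -> PQ signal in [0, 1]."""
    nits = (v ** 2.4) * SDR_WHITE_NITS   # decode with a gamma-2.4 display EOTF
    return pq_encode(nits)

print(sdr_to_hdr(1.0))  # ~0.58: SDR white lands well below PQ full code
```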
 
He says the industry also has the option of porting over larger, 35mm-size cinema-style sensors from the cinematography universe to the broadcast world, as Ikegami and Arri did in 2013 when they introduced the HDK-97ARRI, which essentially placed an Arri Alexa camera head into a broadcast camera body. That approach, however, works for some applications but not as well for live sports applications, he says. His reasoning is simply that “cinematography cameras all require large-format imagers, and that means you will always have shallow depth of field, which is what you like in cinematography,” he says. “But for live sports with lots of movement, among other things, it would be fairly hard to keep focus. But for studio work or sitcoms or episodic, areas more compatible with cinematography applications, it would be an interesting use.”
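The depth-of-field argument is simple arithmetic (my illustration with approximate sensor dimensions, not Centen's figures): at a matched field of view and f-stop, depth of field scales with the crop factor between the two sensor sizes.

```python
# Approximate sensor diagonals (assumptions): Super 35 ~28.5 mm, 2/3-in. ~11 mm.
S35_DIAG_MM = 28.5
TWO_THIRDS_DIAG_MM = 11.0
CROP = S35_DIAG_MM / TWO_THIRDS_DIAG_MM  # ~2.6

def equivalent_f_stop(f_s35: float) -> float:
    """f-stop at which a 2/3-in. camera gives similar depth of field to a
    Super 35 camera at f_s35, for the same field of view."""
    return f_s35 / CROP

for f in (1.4, 2.8, 4.0):
    print(f"Super 35 f/{f} ~ 2/3-in. f/{equivalent_f_stop(f):.1f} for DoF")
```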
 
There are a host of other complicated issues related to this topic. Centen gave a presentation on the subject of HDR imagers at the HPA 2015 Technical Retreat in Palm Springs, Calif., and you can see his presentation here.
News Briefs

Small Drones Get FAA OK

Small unmanned aerial vehicles (UAVs) recently received official approval from the FAA for commercial applications without requiring waivers, starting in August, as explained in this TV Technology article. The development was considered great news by the Hollywood filmmaking community, which has recently been using drones, when granted waivers, for a host of innovative cinematography applications. The approval was part of new rules issued by the FAA for operation of UAVs by both business and government. In the commercial arena, which impacts media applications, drones weighing less than 55 pounds that are not being used by hobbyists are permitted for commercial use without waivers under particular conditions: pilots must be at least 16 years old and either hold a remote pilot certificate with a small UAV rating or be supervised by someone who holds such a certificate, and the drone can be used only in situations where it can be kept in line of sight at all times. The new rules also include height and speed limitations, as well as restrictions on operation near populated areas.
ASC Examines Cinematography Issues

The American Society of Cinematographers recently hosted a four-day conference for cinematographers from around the world, called the International Cinematography Summit. The event was not open to the public, but the ASC has posted comprehensive coverage of the technological and creative topics discussed at the many panels and events that took place. In particular, cinematographers debated the future of their job description on a landscape in which cameras can now capture imagery in 4K, 8K, and stereo, and in which shots can be framed and re-framed, focus altered, and compositions changed in post-production. It was pointed out that some of these changes are leading to potential “crossover” between the job of the cinematographer and the job of the editor, where it is possible, but not necessarily a good idea, for people other than the cinematographer to make compositional or aesthetic decisions about photography at some point after image capture. Also debated were the drive by some in the industry to preserve a meaningful role and business model for the film medium, and the challenges and opportunities of cinematography’s role in the virtual reality realm.
Blazing Fast Chinese Supercomputer

According to a recent Wired article, China recently debuted yet another, even faster supercomputer, called the Sunway TaihuLight, which reportedly blows away previous speed records and puts China far ahead of the United States in the race for what the article calls “supercomputing primacy.” The article states that the new supercomputer’s theoretical peak performance is 125 petaflops, utilizing 10,649,000 cores and 1.31 petabytes of primary memory. The article also states that just 15 years ago, none of the fastest supercomputers in the world were Chinese, and now, all the top ones are, with the newest entry five times faster than anything produced thus far in the U.S. This is an important issue, experts quoted in the article suggest, not just for bragging rights, but for national security and scientific reasons, particularly in the era of cyber-warfare and global warming, as such technology can be used for space and weather system forecasting, life science, data analysis, and much more.
SMPTE Centennial Fundraising Effort Continues to Grow


2016 marks the 100th Anniversary of SMPTE, and in the spirit of celebrating this notable milestone, The Next Century Fund was created. This Fund is a targeted fundraising effort to invest in SMPTE's three pillars: Standards, Education, and Membership. To date, the Society has raised more than $1.8 million in committed gifts toward a $4 million goal. Most recently, Google/YouTube, Fotokem, Blackmagic Design, and Bud Mayo were added to the list of Next Century Fund donors.

The Next Century Fund’s success is among the highlights of the centennial year, and the Society is grateful for the initial support of The Walt Disney Company, Panasonic, Dolby, Technicolor, Ross Video, Aspera, and individual donors Michelle Munson and Serban Simu, Leon Silverman, Wendy Aylsworth, Peter Wharton, Bill Miller, Ellen Sontag-Miller, and Andy Setos. Individual board members are among those who have helped to raise over $50,000 toward SMPTE's goal. This gesture is truly a testament to their commitment to the past, present, and future of SMPTE. In addition to The Next Century Fund, SMPTE is embarking on an Oral History Project and the development of a commemorative book that will chronicle the life of founder C. Francis Jenkins, as well as many of its members, and will take readers on a journey from 1916 to 2016, with a focus on all of the innovation and exciting initiatives that SMPTE has led over the course of the past century.

SMPTE's centennial celebration will culminate with the Centennial Gala, which will take place on Friday, 28 October in the Ray Dolby Ballroom at the Hollywood/Highland complex in Hollywood, California, as the grand finale to the SMPTE 2016 Annual Technical Conference & Exhibition. 

For more information, contact Mary Vinton, Director of Philanthropy, at +1 914 205 2380, and learn more at www.smpte100.org.

The views and opinions expressed are those of the author and do not
necessarily reflect the position of SMPTE.
You're receiving this email because you opted in.

To customize your preferences about the email you receive from us, update your profile. Or, you may unsubscribe if you no longer wish to receive our emails. Please Note: If you unsubscribe, you will no longer receive ANY SMPTE or HPA email.

Society of Motion Picture and Television Engineers | 3 Barker Avenue | White Plains | NY | 10601