Viewers of The Hurt Locker probably had no idea of the contributions made in post to the final look of the six-time Oscar-winning film. Subtle touches in digital grading, like the saturation of the red wires in the bombs and the warmth of the sand on the Iraqi battlefield, helped to underscore the excitement and tension of Sgt. William James’s (Jeremy Renner) world. Images from his home life, skewed colder with a flattened contrast, emphasized the soldier’s ennui at being stateside.
Such creative flourishes have been part of Stephen Nakamura’s toolbox since he moved over from grading television in the late 1990s. His work on the 3D feature The Seventh Son (shot by Newton Thomas Sigel, ASC) arrives in theaters in 2013, and he has graded dozens of other projects, including VFX-heavy 3D features like Prometheus and the upcoming Jack the Giant Killer, and indie documentaries like the highly acclaimed Waiting for Superman.
Jon Silberg talked to Nakamura in his DI Theater at Company 3’s Santa Monica base after a day’s session on Jack the Giant Killer, where the colorist used DaVinci Resolve to show the filmmakers and VFX artists approximations of how composited effects would look in the finished film. Nakamura discussed the history of color grading – from optical to digital – and how his relationships with cinematographers deepen and change on every show.
ICG: What started you on this path? Stephen Nakamura: All my life I’ve tried to figure out what it is about particular images, whether it’s a movie, a photograph or a painting, that I’ve found attractive. It doesn’t matter if the subject is a person or a landscape; there’s an emotion you get from some images and not from others. If you look at any silent movie or an old Twilight Zone with the sound off, it can still evoke an emotion. You have a sense of the story and how you’re supposed to feel about it. I’ve always been fascinated by the way people can become emotionally attached to one work of art but remain indifferent to another.
Did the technical aspect of your job – especially with all the different cameras and formats and displays – come later? No, I’ve always been technical. I was advanced in math and science [at Maui’s Lahainaluna High School] – you know, the stereotypical Asian thing [laughs]. When I was getting ready to go to California to start at Loyola Marymount, my guidance counselor said I should study to become a doctor. I told her I loved to watch movies and TV shows, and she said: “Great, you’ll be a communication arts major!” My mother was freaked out at first, but I said, “Let me try it. If I fail, I’ll go back and do something in math and science.”
This was before digital grading for features existed, so where did you learn your craft? I went through the same steps as most of the people who are coloring features now. I worked my way up through the television post world until I eventually got to grade a lot of the big commercials and music videos for the major directors of the mid-to-late ’90s. I’d done a lot of work with David Fincher, and the first time he finished a movie digitally – Panic Room – he asked me to grade it.
You have your own aesthetic sensibility. But you’re collaborating with other filmmakers. Typically, the main objective is to carry out the director’s vision. Of course, that’s generally the goal of the cinematographer and all the department heads throughout the production. If the director wants a gritty, cold look for a scene or a warm, happy feeling, I’ll take what the cinematographer shot and refine it to help it fit into that emotional space. But you have to take great care in my line of work: Our tools are very powerful, and I can essentially “re-light” within certain limitations. I can increase contrast and saturation, build dozens of power windows throughout the frame, sharpen or defocus tiny portions of the frame. Everything has to work toward the filmmakers’ vision.
Can you provide a recent example? On Prometheus there were a number of scenes where the characters are inside a dark cave, and it had to feel like the inside of a dark cave, not a movie studio. On the other hand, the audience had to be able to see enough detail in the characters and the environment to know what’s going on and not feel like the blacks are completely crushed and there’s some technical mistake. There was really no way Dariusz Wolski, or any cinematographer for that matter, could get the effect Ridley [Scott] wanted. There was no place for the light to be coming from and no way it could envelop the people and leave this cave environment dark. So Dariusz lit the set so all the information was available, and then I was able to use secondaries to bring the light on the walls way down so we just see a hint of detail and to let the characters’ space suits seem almost to be lit from within.
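For readers who think in code, a luminance-qualified secondary of the kind Nakamura describes can be approximated in a few lines of NumPy. This is a minimal sketch: the range, softness and gain values are invented for illustration, not the actual Prometheus settings, and Resolve’s real qualifiers are far more sophisticated.

```python
import numpy as np

def luma_secondary(img, lo=0.3, hi=0.7, gain=0.25, soft=0.05):
    """Toy luminance-qualified secondary: build a soft-edged mask over
    pixels whose values fall between lo and hi, then apply a gain only
    inside that mask. A gain below 1.0 pulls the qualified region down."""
    mask = (np.clip((img - lo) / soft, 0.0, 1.0)
            * np.clip((hi - img) / soft, 0.0, 1.0))
    return img * (1.0 - mask) + img * gain * mask

# Cave walls lit for exposure on set (mid-gray values) get pulled down
# to a hint of detail, while shadows and bright suit highlights stay put.
frame = np.array([0.05, 0.45, 0.55, 0.90])
print(luma_secondary(frame))  # [0.05, 0.1125, 0.1375, 0.90]
```

The design point is the soft mask: a hard on/off qualifier would leave visible edges, whereas the feathered falloff lets the darkened walls blend into the untouched values around them.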
Great description of the DP/DI workflow. How about one more? On the movie Safe House with Denzel Washington, the director Daniel Espinosa had a specific look in mind. We watched a lot of movies and realized he wanted the look of Amores Perros – like after going through an old-style telecine transfer off a high-con dupe. It was an unusual look that combined some of the filmic roll-off of the highlights that you get from certain film dupes along with some of the electronic artifacts that used to come up in telecine. It wouldn’t have been practical, or cost-effective, for the cinematographer, Oliver Wood, to create that photochemically, so he gave me a solid negative, and I built custom curves in the Resolve that brought in those attributes. It wasn’t about changing Oliver’s vision. It was about working together with him and the director. Oliver was very complimentary about the final look.
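A custom highlight roll-off curve like the one Nakamura describes building can also be sketched briefly. This toy Python version assumes normalized code values; the shoulder position and roll-off strength are illustrative parameters, not reconstructions of the actual Safe House curves, and it models only the filmic highlight roll-off, not the telecine artifacts.

```python
import numpy as np

def filmic_rolloff(x, shoulder=0.8, strength=0.6):
    """Toy tone curve: linear below the shoulder, then a soft asymptotic
    roll-off that compresses highlights toward 1.0 instead of clipping.
    Input and output are normalized 0.0-1.0 code values."""
    x = np.clip(x, 0.0, 1.0)
    knee = (1.0 - shoulder) * strength
    return np.where(
        x <= shoulder,
        x,
        shoulder + (1.0 - shoulder) * (1.0 - np.exp(-(x - shoulder) / knee)),
    )

# Values above the shoulder are eased off rather than clipped:
# a full-scale 1.0 input lands around 0.96 instead of hitting the wall.
print(filmic_rolloff(np.array([0.2, 0.7, 0.85, 0.95, 1.0])))
```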
So much can be done in post. Are there still aspects of cinematography that can only be done correctly on set? Definitely exposure. It’s more difficult to fix an under-exposed digital image than some people realize. Everybody knows you don’t need a lot of light with many of today’s sensors, and they know the danger of clipping when you over-expose. But under-exposure can be a huge problem, even with the newest and best camera systems. If there isn’t enough light hitting the sensor in the first place, then no matter how much I manipulate that image – to brighten it, or just to match it to the previous shot – it will get noisy. And it doesn’t look like the film grain we’re used to; it’s electronic noise that can be pretty ugly.
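The math behind that warning is simple enough to sketch. This toy NumPy model assumes a fixed amount of additive read noise; the exposure levels and noise figure are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a gray card that should sit at 0.40 is captured two stops
# under (0.10), with a fixed amount of sensor read noise in both cases.
scene = np.full(10_000, 0.40)
read_noise = rng.normal(0.0, 0.01, scene.size)

well_exposed  = scene + read_noise
under_exposed = scene / 4.0 + read_noise   # two stops under

# "Fixing it in post": multiplying by 4 restores the brightness,
# but it multiplies the noise by 4 as well.
lifted = under_exposed * 4.0

print(f"well-exposed noise:    {well_exposed.std():.4f}")
print(f"lifted-in-post noise:  {lifted.std():.4f}")   # roughly 4x worse
```

The signal comes back to the right level, but the noise floor rides up with it, which is exactly the ugly electronic grain Nakamura is describing.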
When you started doing DI work, the original capture and final product were always motion picture film. That’s all changed. Yes, and it really impacts my job. Different formats, different color spaces, different ways of recording image data, different displays – they’re all now part of the process. An image might convey a certain emotional impact when it’s projected with light shining through film that uses colored dyes onto a 40-foot screen that reflects 16 footlamberts of light. That’s not the same as a Rec. 709 image on an HD monitor that’s backlit, has a different color temperature and puts out 30 footlamberts.
So how should consistency be maintained from set to finish? It was all pretty simple when it was a timer and his printer lights. Now there needs to be clear communication from pre-production through delivery, because there are so many more factors involved. Is the image logarithmic or linear or Rec. 709? Is the color space Rec. 709 or P3 log or P3 linear? All these things make a difference, which is why it’s so important for post supervisors to see the pipeline from the outset. I often work on films that will be viewed in different venues, and I’ll do a different pass for each one. We knew Prometheus would be shown in 3D in many theaters at under 4 footlamberts and on some select screens at 6. If you grade something for 6 and show it at 4, you’re going to lose information in the highlights. So we did a pass for 4 and then another one for 6.
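A rough calculation shows why the separate passes matter. This sketch assumes the DCI gamma-2.6 transfer used in digital cinema projection and uses invented highlight code values; it is a simplification of the perceptual effect, not a grading recipe.

```python
# Why a grade balanced for 6 footlamberts loses highlight separation
# when the projector only reaches 4 fL. Assumes a gamma-2.6 transfer;
# the two highlight code values are illustrative.
PEAKS_FL = (6.0, 4.0)
GAMMA = 2.6

highlight_a, highlight_b = 0.90, 1.00   # two nearby highlight code values

for peak in PEAKS_FL:
    lum_a = peak * highlight_a ** GAMMA
    lum_b = peak * highlight_b ** GAMMA
    print(f"{peak:.0f} fL peak: {lum_a:.2f} vs {lum_b:.2f} fL "
          f"(separation {lum_b - lum_a:.2f} fL)")
```

The same two code values are separated by roughly a third less absolute light at 4 fL than at 6, so highlight detail that reads clearly on the brighter screen starts to merge on the dimmer one – one way to see why each target gets its own pass.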
And the 3D glasses have an impact as well? Yes. On Prometheus I did some selective sharpening and defocusing of many shots to help guide the viewer’s eye to a particular character or a holographic image. It was sort of a digital way to limit the perception of “depth of field” in post. It worked nicely and looked subtle through the 3D glasses. But those glasses have a real softening effect. When we did a pass for the 2D version, the sharpening and defocusing looked overdone – not organic at all. So I went through the whole film and dialed the effect way back in every shot so it would have the same effect without glasses that it did with them.
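A classic unsharp mask with an adjustable amount captures the idea of “dialing it way back.” This sketch uses SciPy’s Gaussian blur and is not meant to represent Resolve’s actual sharpening operator; the radius and amount values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, amount=1.0):
    """Classic unsharp mask: add back the difference between the image
    and a blurred copy, scaled by `amount`. Reducing `amount` is the
    2D-pass equivalent of dialing the effect way back."""
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

img = np.random.default_rng(1).random((64, 64))
for_3d = unsharp_mask(img, amount=1.0)   # reads as subtle through glasses
for_2d = unsharp_mask(img, amount=0.3)   # dialed back for the 2D version
```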
The DI is a relatively new process. How would you say it’s changed the most since you started doing it a decade ago? When I started working on features in 2K, the biggest difference was simply how much more information had to be processed and moved around. Take that defocus technique I mentioned: we could do it for TV, but it takes a lot of processing power, so at 2K we’d have to either wait around for the effect to render or work with lower-resolution proxies. Same thing with tracking. Today we can track very effectively in real time; in those days you had to go through it by hand, and it could take a long time. If you wanted to do several keys and have a few windows, it could become a really big deal. Today I can build 10 or 20 nodes [groups of effects within the DaVinci] and see them right away in real time at full resolution. The biggest change is that we can work faster and focus more on creativity.
Do you think higher frame-rate technologies – 48 and 60 frames per second – will catch on? I just hope filmmakers have as many different formats available as possible. After seeing film for so many years, we’ve gotten used to attributes that are more painterly: twenty-four frames per second with a shutter, motion blur, and a relatively soft image because of the characteristic curves inherent in the way film reacts to light. And film’s response is logarithmic. Those are all very different from HD capture at an NBA game and the feeling that you’re right on the court. Of course, we can apply film’s characteristics to digital images, or we can go in the other direction with sharper, linear images and higher frame rates. For me it boils down to what got me into this business in the first place. However movies are shot and displayed, it’s ultimately about how the audience responds to the images emotionally.
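That linear-versus-logarithmic distinction can be sketched with a toy encoding curve. The stop range and code mapping below are invented for illustration and don’t correspond to any real camera spec or film stock.

```python
import numpy as np

def toy_log_encode(linear, min_exp=-6.0, max_exp=4.0, mid_gray=0.18):
    """Toy log curve spanning ~10 stops around 18% gray. Like film's
    characteristic curve, it spends far more code values on shadows
    and midtones than a straight linear encoding does."""
    stops = np.log2(np.maximum(linear, 1e-6) / mid_gray)
    return np.clip((stops - min_exp) / (max_exp - min_exp), 0.0, 1.0)

scene = np.array([0.01, 0.05, 0.18, 0.50, 1.00])   # linear scene light
print("linear:", scene)
print("log:   ", np.round(toy_log_encode(scene), 3))
# Deep shadows that sit at 0.01 linear land near 0.18 in the log
# encoding, preserving shadow separation the way film's response does.
```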
Photo courtesy of Company 3 / Siouxzen King