Hear the fantastic, incredible story of the superheroes behind The Amazing Spider-Man!
Forget The Avengers. When audiences slip on their 3D glasses this month to enjoy The Amazing Spider-Man, they’ll be drinking in the efforts of a clan of super-humans who have battled for nearly three years to bring this epic comic book reboot to the screen. Cinematographer John Schwartzman, ASC, worked with a camera system – the RED Epic – that was barely beyond the prototype stage, with no frame lines or remote on-off switch for a third of the three-month shoot. In the DP’s corner (literally) were RED’s technicians (present on the set throughout), diligently creating hundreds of iterations of their software in real time to adapt to the show’s needs.
Other superheroes on board included senior visual effects supervisor Jerome Chen, VES and stereographer/3D visual effects supervisor Rob Engle, VES (both with Sony Pictures Imageworks), who were constantly modifying their techniques to the new workflow. Schwartzman’s Harry Osborn (Peter Parker’s best friend) was on-set colorist/DIT Brook Willard, who oversaw the smooth operation of dailies, first-pass color timing, and backup procedures, ensuring that director Marc Webb’s unique new vision for Spidey would reach audiences exactly as it was intended. What follows is ICG’s own reboot of the amazing men behind this latest version of Stan Lee’s enduring teenage superhero, told in their own amazing words.
ORIGINS
John Schwartzman: I was a comic book collector back in the 1960s and ’70s. I have an extensive collection that includes Spider-Man numbers 1 through 150. The opportunity to reboot the Spider-Man franchise was intriguing, and I liked Marc Webb from our first meeting. I thought he had great energy and was incredibly bright. He told me he wanted the film to look like The French Connection: a New York City with trash and graffiti, closer to what Christopher Nolan and Wally Pfister did with Batman.
Marc Webb: Spider-Man is an ongoing character. He’s been around for 50 years. It’s not like Harry Potter, where there is a closed canon. There are hundreds, maybe thousands of comic books with unique stories and different reinventions of the character. So I thought it would be cool to do that cinematically. We focused on three stories: what happened to Peter Parker’s parents; the story of The Lizard, who I think is a great villain; and the Gwen Stacy saga, which is one of the most interesting, if somewhat controversial, storylines in all of comics.
I wanted to strip away the stylized quality. Sam Raimi had done something unique and cool, something very authentic to what you get when you open the comic book. I wanted this film to feel naturalistic and grounded. I think the handheld images at the beginning of the movie give it a level of realism that can support intimate moments between characters. It’s so easy to lose yourself in the spectacle of these big comic book movies. That’s fun, but I also wanted to hold onto the small moments, to see the inflections and details of each performance.
Schwartzman: Just as we began preproduction, in February 2010, 3D had hit big. There was Avatar, and Michael Bay was doing Transformers in 3D. I think Sony felt like they had the right material with Spider-Man, and Marc and I couldn’t disagree. Spider-Man climbs up buildings and swings through the air. It’s a visceral movie, so why not add the element of depth and shoot it in stereo?
In June 2010, Marc and I did tests with a 2D and 3D camera on the roof of L.A.’s Bonaventure Hotel. We rigged a dummy on a wire, threw it off, and craned down with it. Based on the advice of some friends, I had gone to Panavision and got an Element Technica Quasar rig with two Genesis cameras and two 19-90-millimeter zoom lenses. That system weighed 128 pounds and was about four and a half feet tall. Only one or two remote heads in the entire country could handle the weight. I knew whatever system we used had to be dynamic, and it had to have a full-sized sensor because of the advantages in resolution, depth of field and lensing. Marc didn’t want to do much CG. He wanted to really swing stunt people and shoot them live action.
I approached the engineering staff at Sony and asked if they could build a T-head camera – essentially a lens mount and a sensor, with a fiber-optic cable to carry the signal to an outboard processor/recording device. They attempted to do that, but their focus was more on the F65. RED showed me an early version of the Epic and said they could get it ready in time for us to shoot in December [2010]. The plan had been for The Hobbit to be the first movie to use these cameras. But The Hobbit was pushed back, and Andrew Lesnie was glad to let me do his R&D.
Everyone out there seemed to be renting a box from 3ality called a Stereo Image Processor. It measures the alignment of your cameras, and with their rig, it worked like a dual feedback system, with the result that you could take the footage right out of the camera and project it in 3D with few adjustments. Our visual-effects house, Imageworks, was also on board, in part because the metadata the system produced was useful for them.
By the time we walked out of 3ality, we had a camera that could fit on any tripod or Libra head and didn’t weigh any more than a Panaflex with a zoom lens. It wasn’t as ergonomic as a film camera, but it weighed about 50 pounds – doable for Steadicam. I could put it on a dolly, or on a 15-foot Technocrane inside of a house – or on a 50-foot Technocrane outside. I didn’t have to worry about how I was going to shoot the movie, so the rest of it became about how we wanted to tell the story. After we finished, there were five other movies shooting in 3D and working off our model.
Since our DI vendor, Sony Colorworks, did not feel comfortable working with the RED Epic, we had to design our own in-house workflow. I could look at 2D and 3D monitors during capture, and we had dailies and the vault all on set. I had a bunch of smart young guys, and we had zero data loss on the entire movie.
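A claim of zero data loss is typically backed by checksums: every copy is verified bit-for-bit against the camera original before a card is wiped. A minimal sketch of the idea, with illustrative file names rather than the production’s actual tooling:

```python
import hashlib

def sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so multi-gigabyte clips never load into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Compare the vault copy to the camera original; wipe the card only on a match.
# File paths are illustrative.
if sha256("card/A001_C001.R3D") == sha256("vault/A001_C001.R3D"):
    print("verified: safe to clear the card")
else:
    print("MISMATCH: re-copy before touching the card")
```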
PROCESS
DIT/colorist Brook Willard: For every single shot John made, we did color correction as well as preliminary technical matching of the left and right eyes. We used RED’s tools to create settings files that could be accessed and applied to the raw footage at any point. Anything that can read RED footage can read these .RMD files. Those color-correction settings carried all the way through dailies, editorial, visual effects and to the DI, if needed.
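Because the settings live in small .RMD sidecar files next to the raw clips, carrying a grade downstream is essentially file management. A minimal sketch of that idea, with illustrative directory names rather than Willard’s actual setup:

```python
import shutil
from pathlib import Path

# Illustrative directories, not the production's actual layout.
graded = Path("set_color")   # .RMD settings written by the on-set colorist
vault = Path("vault")        # mirrored camera-original .R3D media

# Drop each clip's .RMD sidecar next to its raw file so the on-set grade
# travels with the media through dailies, editorial and VFX.
for rmd in graded.glob("*.RMD"):
    clip = vault / (rmd.stem + ".R3D")
    if clip.exists():
        shutil.copy2(rmd, clip.with_suffix(".RMD"))
```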
We literally had an engineer from RED writing code in the corner. He’d give me a build of the software, and five minutes later, I’d ask him to change something. John would glance over between shots and approve the images, or ask for something different. By the end of the day, people were walking home with iPads full of at least the first half of the day’s dailies.
People shouldn’t try to take file-based digital cameras and squeeze the images through the same pipeline they’ve been using for film. Every post house has their special sauce, so they can say that their implementation is better. I think that’s a detriment. Let the camera be exactly what it wants to be. Go with the workflow from the manufacturer. We didn’t introduce any special film emulation LUTs, or high-dollar color-correction equipment. We did it all on set with a couple of Macs.
Schwartzman: The nice thing about these CMOS cameras is that you can shoot them like a regular film camera. I used my light meter. I figured out what the film speed of the camera was, and I lit sets the way I would if we were shooting [Kodak] 5219. The RED Epic was 800 ASA in full daylight and 640 in tungsten light. You lose a stop to the mirror in the 3D rig, so we were basically 400 ASA outside and 320 inside. After two weeks, I was able to see light in the way the camera would, which is what we do as cinematographers.
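The arithmetic is the definition of a stop, a factor of two in exposure; losing one stop to the beam-splitter halves the effective rating:

```latex
% One stop = a factor of two in exposure. Losing s stops divides the
% effective exposure index by 2^s:
\[
  \mathrm{EI}_{\mathrm{eff}} = \mathrm{EI}_{\mathrm{base}} \cdot 2^{-s},
  \qquad 800 \cdot 2^{-1} = 400 \;\;\text{(daylight)},
  \qquad 640 \cdot 2^{-1} = 320 \;\;\text{(tungsten)}.
\]
```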
We shot 2.40, and the image was 4.5K. We shot it with a little bit of room on the sides so we could reposition and do the convergence later, in post. We adjusted the interaxial distance on set depending on the shot. Jim Cameron, who was very generous with advice, told us we’d want to redo the convergence on the whole movie anyway.
Webb: You expect 3D to work well in big spaces. But it’s in small spaces where you really feel a sense of intimacy. Jeffrey Katzenberg talks about a scene in How to Train Your Dragon where the audience hunches their shoulders because they feel they are in the small space. It’s a counterintuitive use of 3D, but I think it’s quite powerful.
To me, there are three V’s of 3D: velocity, volume and vertigo. Those are the parts of the brain it can tickle in a specific way, all of which are great for Spider-Man. I wanted to create an environment in the film that exploited all those things.
Schwartzman: The 3D did not slow us down. There was a lot more infrastructure and logistics. If we were starting at a new location, we’d bring the camera crew in a bit earlier to get set up. But after we were set up, everything worked normally. We had a camera crew of 20, where normally I’d have a crew of eight.
Kym Barrett, our costume designer, is best known for Trinity’s black coat in The Matrix and for her work with Cirque du Soleil. The old Spider-Man suit had muscles built into it. This suit looks like it’s sprayed on. It moved beautifully, and shooting 3D allowed Kym to add some tertiary texture. We had different suits for different lighting situations. There was one suit for situations where the light was predominantly sodium vapor, and another for HMI moonlight blue. The light reflected on the suit in interesting ways.
My lighting was the same as shooting 2D. The only thing I learned was that the amount of depth of field you carry on a digital sensor is a fraction of what you carry on film, because of the difference in the circle of confusion. I wasn’t trying to keep everything in focus, but at the same time I wanted to keep it looking like I would on film. I needed to stop down about two-thirds of a stop to match the same depth of field for a given lens at a given distance.
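A back-of-envelope derivation shows where a figure like two-thirds of a stop can come from; the circle-of-confusion ratio below is an assumed value, not one given by the production:

```latex
% For subject distance s well inside the hyperfocal distance, total depth
% of field is approximately DoF ~ 2*N*c*s^2 / f^2, with f-number N and
% circle of confusion c. Holding DoF fixed while c shrinks forces N up by
% the inverse ratio. If the digital sensor's acceptable c were about 80%
% of film's (an assumption), N would grow by 1/0.8 ~ 1.26 = (sqrt(2))^(2/3),
% i.e., two-thirds of a stop, matching the rule of thumb above.
\[
  \mathrm{DoF} \approx \frac{2\,N\,c\,s^{2}}{f^{2}}
  \quad\Longrightarrow\quad
  \frac{N_{\mathrm{digital}}}{N_{\mathrm{film}}}
  = \frac{c_{\mathrm{film}}}{c_{\mathrm{digital}}}
  \approx \frac{1}{0.8} \approx 1.26 = \bigl(\sqrt{2}\bigr)^{2/3}.
\]
```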
TRIUMPHS
Senior visual effects supervisor Jerome Chen: We met with color scientists at RED to figure out the sweet spot, the resolution we could de-Bayer the image down to, which was about 3.5K. Since the DCI release is 2048 [pixels] across, we devised a padded resolution that gave us ten percent all around the DCI image, so we could do effects work, reposition and do convergence. If we needed to blow something up, we would go back and pull a 4K version. The images were amazing, quite beautiful with their own very distinct look – there’s no grain and no chip noise. A perfectly clean image meant our CG would need to be just as clean.
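As a minimal sketch of that arithmetic, reading “ten percent all around” as ten percent per edge (the exact pad isn’t specified), against the standard 2048×858 DCI 2K scope extraction:

```python
# Padded working resolution around the DCI 2K scope (2.39:1) extraction.
DCI_W, DCI_H = 2048, 858   # standard DCI 2K scope image in pixels
PAD = 0.10                 # "ten percent all around," read as 10% per edge

padded_w = round(DCI_W * (1 + 2 * PAD))   # 2458 px
padded_h = round(DCI_H * (1 + 2 * PAD))   # 1030 px
print(padded_w, padded_h)
```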
Stereographer/3D visual effects supervisor Rob Engle: Shooting parallel – setting the IA on set, but not actually converging the cameras – is very similar to the technique we use on CG movies. It has two advantages: first, one less “knob” to worry about in the heat of production, and second, since the camera axes are parallel, there is very little keystoning, which makes it easier to look at and easier to process in VFX. The con to this approach is that all your footage must be converged in post, an extra step before it can be properly viewed. Large-format sensors make this technique possible, because without the larger image area, you must converge on set.
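Converging in post amounts to shifting the two eyes horizontally in opposite directions, with the revealed edges taken from the padded image area captured for exactly this purpose. A minimal sketch, illustrative rather than Imageworks’ actual pipeline (real tools resample at sub-pixel precision and crop rather than wrap):

```python
import numpy as np

def converge(left: np.ndarray, right: np.ndarray, shift_px: int):
    """Set the convergence plane by shifting each eye horizontally.

    Positive shift_px pushes the scene away from the viewer. np.roll wraps
    pixels around the frame edge; a real pipeline would instead pull the
    revealed pixels from the padded capture area and then crop to DCI.
    Arrays are (height, width) or (height, width, channels)."""
    half = shift_px // 2
    left_c = np.roll(left, -half, axis=1)
    right_c = np.roll(right, half, axis=1)
    return left_c, right_c
```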
Every frame of the movie, whether visual effects or not, goes through the 3D department. Footage is aligned, color balanced or converted to 3D. Fully CG shots need to have stereo parameters set. All visual effects shots need to be scrutinized to ensure that the composites maintain the proper 3D effect [objects are at the right depth and do not have other artifacts]. It is common that an effect that looks fine in 2D will break when seen in 3D. It was part of my job to identify these cases and work with Jerome’s team to adjust them. I’m helping to create one overarching visual effect: the illusion of 3D.
Chen: Very rarely did we shoot stereo effects plates. Typically, we would shoot those plates with a single camera, put the effect in and then convert it to stereo. But across the rest of the pipeline, stereo had a huge impact. When dealing with match-moving and tracking issues, you must be flawless so that the CG integrates properly. Finessing that took a lot of R&D in terms of the tools. Imageworks had done a lot of stereo work in the past, but it was basically pure CG. In this case we had to develop an entirely new pipeline, and that was very educational.
People underestimate the difficulty of doing stereo composites. Stereo is already so different from a regular viewing experience that you’re not sure what is an artifact and what is just in the nature of stereo. But you learn to identify artifacts and what needs to be fixed. Unlike in pure CG, in native 3D photography there can be slight variations in how light hits a piece of hair from one eye to the other. It takes a lot of iterations and a lot of patience.
Almost all of my bandwidth is taken up with whether or not things feel right. We use a lot of high-tech tools and software, but once you get through the initial mental process of setting it up technically, when you get to the actual image, it’s always about how it feels and whether it fits in with the next shot. Sometimes there are cases where you plan all these different CG effects – particle, smoke, fluid dynamics. But then you look at the shot and realize you have to keep it simple or the audience won’t understand. Maybe it should just be Spider-Man looking up.
Schwartzman: We shot as much as we could practically, because Marc wanted to avoid CG Spider-Man whenever possible. We were swinging stuntmen from 200-foot construction cranes next to the Flatiron Building in Manhattan. I’m not saying there’s no animation in the movie. But when CG was needed, the Imageworks people were excited to have a very good idea of what a human body swinging through the air at the edge of control actually looks like.
Webb: I knew this project was a burly beast, and John was not going to flinch. I needed someone who could handle a big crew and who wasn’t going to be too precious. John also has the aesthetic sensibility and an understanding of effects. His crew is such a phenomenal group of people – [key grip] Les Tomita is a genius. RED was incredibly supportive and nimble, and the 3ality people did a fantastic job. There were a lot of balls in the air. We were trying to achieve something different, within a different universe. I wanted to give the audience an experience that’s worthy of Spider-Man, and I think we’ve done that.
By David Heuring / photos by Jaimie Trueblood