As 3D has moved out of theme parks and into story-driven movies like the game-changing Avatar, the format’s presence in theatres has grown exponentially. Early arrivals like 2005’s Chicken Little and 2007’s Beowulf were followed by around five 3D films in 2008, ten in 2009, and more than 20 scheduled for 2010. 2011 already has over 30 films planned for 3D release, and those are only the ones that have been announced.
Helping to drive this exponential growth is the process of 2D-to-3D conversion. Instead of capturing the 3D on-set using a stereoscopic camera with two distinct lenses, a number of companies use proprietary processes to convert an image from 2D to 3D. After Avatar made waves with its 3D in late 2009, many studios selected projects for conversion, quickly increasing the number of features planned for 3D release.
“Most of the 2D-to-3D process is a visual effect; it’s just a specific application of different [FX] things,” explains Matt DeJohn of In-Three (www.in-three.com), which converted part of Alice in Wonderland. “We’re creating mattes like you would on a green screen, we’re keying out characters, we’re doing paint like you would in rig [lines used in stunt work] removal, and we’re modeling, like you would in CG.”
While every conversion company goes about the process in a different way, each with its relative strengths, conversion generally requires three steps: separating out the different elements in a shot, adding depth, and painting in the gaps. In the first step, effects artists define each element and separate out characters, objects and background elements. Here, the biggest challenge for many companies is elements like transparencies, smoky areas and wispy hair, which are difficult to separate from the background. In-Three uses rotoscoping for this step, a technique in which artists and/or computer programs trace over live-action sequences. “In Alice in Wonderland, she has a lot of wispy hair,” DeJohn notes. “A bad version of the roto for her would make it look like she had a haircut that took all the wispy hair off her head, or conversely, it could be falling to the background and stuck to the background.”
Legend3D, which also had a hand in Alice in Wonderland, doesn’t use rotoscoping for this step, instead applying a pixel-specific process developed from the company’s original focus, colorizing black-and-white films. As Barry Sandrew, the founder and president/COO, describes it, the creative team decides which pixels of an object to separate in a couple of representative frames of a shot. At the end of the day, they send their work to the company’s office in India, which carries it out over the rest of the shot, a labor-intensive process in which those two frames may grow to ten seconds’ worth of frames. Because the Indian office is twelve and a half hours ahead of San Diego, work goes on continuously, passed off to the next office at the close of each business day.
In the second step, artists gauge the depth in a scene, inserting depth cues that pop some elements out and draw others farther back. The work involves both technical skill and artistic judgment. DeJohn, who started out as an artist, speaks highly of In-Three’s artists. “We’ve been doing it for five years and our best artists have been there for that long. We know how to look at a 2D image and recreate the depth realistically. That’s definitely a learned skill.”
Prime Focus (www.primefocusworld.com), which converted Clash of the Titans, emphasizes its ability to change depth in real time. “What’s cool about our process is it’s iterative,” Rob Hummel, CEO of post-production in North America, enthuses. “Meaning that if you don’t like the depth cue, if we made it look like he’s five feet in front of the tree and you want it to look like he’s ten feet in front of the tree, we can stop and make those adjustments. In fact, we can stop on a frame and show you these adjustments in real time, interactively, and kick off a render and show you the new rendered version with new depth cues 15 minutes later. There’s not a negative cost impact if you change your mind late in the game.”
Besides creating space between elements, artists also need to mold the objects themselves. “You want to convert a beach ball so it doesn’t look like a disc, it looks like a sphere,” Hummel offers as an example. Overly flat elements can make a converted scene look like a set of cardboard cutouts. To round out an object, some companies create a CG model of the object, on which they overlay the image. Sandrew feels that process is “primitive.” Legend3D “separates out all the different parts of every single object, and then within our program we mold it. We don’t create CG elements, we actually mold this into the actual object in 3D space.”
In the final step of the process, artists fill in the gaps that appear when more extreme depth is created. Part of the reason Prime Focus’ “iterative” depth process works is that it largely avoids the paint step. In the over 2,000 shots in Clash of the Titans, Hummel says they used the paint step just 12 times, and in another feature film they recently participated in, only 30 adjustments were painted out of over 600 shots.
However, not everyone in the conversion business accepts this kind of work. DeJohn is skeptical of a process that avoids the paint step. “If they actually created distinct separation between their objects, they would have to paint because there’s just no way to create the effect otherwise. So what they’re doing is compromising the quality of the depth in order to avoid the detailed paint and matte work.” DeJohn describes the Prime Focus approach as creating more of a “rubber sheet” effect. “If you imagine the movie screen like a rubber sheet and you push and pull to get a 3D effect, it’s popping off the screen and pushing into the screen. You essentially have no distinct separation,” he declares.
In response, Hummel says, “I think it’s quaint to say that you can only do it with paint, but that’s the same kind of person that, if they were going to manipulate a photograph, would do it in a darkroom, rather than with Photoshop. We live in a digital world with digital manipulation tools where we’re able to manipulate the images in ways that don’t require that kind of burden. It’s not like we never paint, but I would say it’s less than one percent of the shots we do.”
Legend3D also doesn’t do extensive paint work, using a technique that Prime Focus also employs. “We can create two eyes, which is a more natural way of handling it,” Sandrew explains. While most companies keep the original flat image as the left-eye image, for example, and create a new right-eye image, Legend3D will usually create two new images, which minimizes the size of the gaps. “If you do one eye, the gaps are quite large, but it’s only in one eye. So anyone with compositing skill can clean that up. If you create two eyes, then the gaps are separated into both eyes, so they’re about half as wide. We have automatic gap-filling that’s algorithmic—a lot of people don’t have that—provided the gap is not too wide.”
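The gap arithmetic Sandrew describes can be illustrated with a toy example (a sketch of the principle only, not any company’s actual pipeline; a row of characters stands in for a line of pixels): shifting one eye by the full disparity leaves one wide hole, while splitting the shift across two newly generated eyes leaves two holes half as wide.

```python
def shift_object(row, start, end, shift, hole="_"):
    """Move the foreground span row[start:end] sideways by `shift` pixels.
    Pixels the object vacates become holes a paint step would have to fill."""
    out = list(row)
    obj = row[start:end]
    for i in range(start, end):      # erase the object's old position
        out[i] = hole
    for i, ch in enumerate(obj):     # paste it at the shifted position
        out[start + shift + i] = ch
    return "".join(out)

row = "bbbbbFFFFbbbbb"               # background 'b' with a foreground object 'F'

# One-eye conversion: keep the original as the left eye and shift the
# right eye by the full 4-pixel disparity -> one 4-pixel hole, all in one eye.
right_only = shift_object(row, 5, 9, 4)

# Two-eye conversion: split the disparity +2/-2 across two new images
# -> a 2-pixel hole in each eye, each half as wide and easier to fill.
left_eye = shift_object(row, 5, 9, -2)
right_eye = shift_object(row, 5, 9, +2)

print(right_only, right_only.count("_"))  # bbbbb____FFFFb 4
print(left_eye, left_eye.count("_"))      # bbbFFFF__bbbbb 2
print(right_eye, right_eye.count("_"))    # bbbbb__FFFFbbb 2
```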
2D-to-3D conversion has a wide range of applications and offers a way around many of the challenges of shooting “natively” in 3D. Even filmmakers shooting with a 3D camera may need to use 2D-to-3D conversion for select shots or to fix technical problems. Because each technique has strengths and weaknesses, the public will be seeing more 3D films that use both cameras and conversion to achieve their effect.
“I think the hybrid approach is probably the smartest approach for a lot of the studios, a lot of the producers and directors,” Sandrew observes. “One of the projects we are about to get into was originally supposed to be a 2D-to-3D conversion completely, and it turned out to be about 50% captured natively and 50% 3D conversion. For most of the heavy effects scenes, it’s much more efficient and effective to use 2D-to-3D conversion rather than capturing in a camera.”
“3D conversion is just another tool in a filmmaker’s arsenal,” Toni Pace, a senior visual effects producer at Legend3D, explains. “Even if they are committed to shooting in stereo on-set, the camera can be misaligned, it could be too big to take to a location, it could be a hostile environment like water.” The technique also gives filmmakers a chance to “revise their decision” in post if they want to change the depth in a scene, for example.
2D-to-3D conversion was even used in that pinnacle of 3D cinema, Avatar. Prime Focus did several shots for the movie, a fact not widely known. “A couple times the best way to incorporate the composite was to convert part of the image,” Hummel reveals. “On an occasion or two with the original photography, there was a problem, like one lens was closer than the other, so rather than resizing the image, it was easier to convert one of the images to match the stereoscopic imagery. However, it was always done in service of the visual effects and under the guidance of the Avatar team.”
The conversion of library titles into 3D is also anticipated to be a huge market. Currently, conversions are underway for 2D films with upcoming 3D sequels. “I think it will take one really big film to test the waters to see if the public will go back to the theatre to see it in 3D,” DeJohn speculates.
The home market will provide yet another venue for 3D viewing. The newest line of HD televisions is going to be bundled with 3D capability for no extra charge, making the technology easy to embrace. Studios hope 3D titles will ramp up Blu-ray sales. Currently, theatres have an edge on the 3D viewing experience. Many home 3D sets use expensive active-shutter glasses, and “a little bit of depth on a smaller screen is pretty unimpactful,” DeJohn observes, though he notes that a film like Avatar will still look good on a small screen.
So far, 2D-to-3D conversions have generated some negative comments in the marketplace, but this hasn’t stopped their growth. With so much business coming in for more conversions, no one seems worried. “I don’t think 3D is going to disappear at all because there’s too much momentum right now from the consumer-electronics industry, the studios, the exhibitors. It’s huge. The momentum that’s been built up is so significant, it’s unstoppable,” Sandrew argues.
The whipping boy of 2D-to-3D conversion was this April’s Clash of the Titans, which drew the ire of critics, moviegoers, and people who didn’t even see the movie. “Even though it was trashed worse than any movie I’ve ever seen, it still broke a record for an Easter film and made over $200 million,” Sandrew says of the movie, which was converted by Prime Focus, a competitor. “The only problem they had was taking on a project that didn’t [give] enough time to do it right; that’s the only thing that they were responsible for. The audience is demanding 3D, and no matter how badly critics trash a movie, people still go to it. The Last Airbender also got trashed, it was done by StereoD, and that did well too.”
Those working in visual effects acknowledge that bad 2D-to-3D conversion exists, but see it more as a growing pain that market forces will eventually correct. “The quality is there; it’s a question of whether the studios will pay the money to get that quality product, and give them enough time to do it to the full degree,” DeJohn observes.
Companies also expect that audiences will become more discerning consumers. “I think what’s important is for the audience to become more sophisticated observers—once that happens, the quality is going to go up significantly,” Sandrew predicts. Pace, a recent hire who worked on Avatar, thinks the unique emotional effect of 3D films will sway audiences. “I think that people really want immersive experiences, and 3D stereo projection in theatres is the next generation of immersive experiences. I think it’s only going to get better from here.”
Exhibiting 3D films properly requires adjustments that not every cinema has mastered. “3D exhibition is kind of like the Wild West right now,” Hummel states. After seeing Clash vilified, “I believe that something happened in the rollout of that film into theatres. People saw what they saw, but a lot of the descriptions that I read on the Internet were of eyes being flopped—meaning the projectors got out of phase and the left eye was seeing the right-eye information and the right eye was seeing the left-eye information.”
Another potential problem in 3D viewing is “ghosting.” In some theatres, you can see a double image in high-contrast areas. Dialing down the contrast can help fix the problem, but that also reduces the quality of the image.
One of the most noted problems is the loss of light that occurs when audiences put on 3D glasses, which filmmakers such as Christopher Nolan have pointed out as a major defect of the 3D experience. Most projectors throw 14 foot-lamberts, but theatres will still show a movie if their projector is measuring at 12 foot-lamberts, Hummel explains, since it’s just a 14% loss in light. The problem is when you add 3D to the equation. 3D glasses are equivalent to a two-stop loss in light: going down one stop cuts the light in half, so going down two stops leaves only a quarter of it. A projector that starts out at 14 foot-lamberts is brought down to a dim 3.5 by the glasses; one that starts at 12 ends up at just 3. At light levels that low, shortfalls that would pass unnoticed in 2D become glaring.
“A two-foot-lambert swing on normal digital cinema is not that bad, it doesn’t damage the picture that much,” Hummel observes. “A two-foot-lambert swing on 3D is death. It will absolutely devastate the image and ruin the experience for the audience.”
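Hummel’s stop arithmetic is easy to check in a few lines (an illustrative sketch, assuming an exact two-stop loss through the glasses):

```python
def brightness_through_glasses(screen_fl, stops=2.0):
    """Light reaching the eye (foot-lamberts) after an n-stop loss;
    each stop halves the light, so two stops leave a quarter of it."""
    return screen_fl / (2 ** stops)

for projector in (14.0, 12.0):
    fl_3d = brightness_through_glasses(projector)
    print(f"{projector} fL projector -> {fl_3d} fL through 3D glasses")
# A 14 fL projector yields 3.5 fL in 3D; a 12 fL projector only 3.0 fL,
# so at 3D light levels there is almost no brightness left to give away.
```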
Some films, like Avatar, created different DCPs (digital cinema packages) for exhibitors based on the amount of light they could get from their projectors. “No one other than James Cameron and Jon Landau [would have done that]; they made sure the audience was seeing the best representation of the movie,” Hummel enthuses. “If they knew the theatre could show it at six foot-lamberts, they delivered a file to that theatre that looks good at six foot-lamberts.”
As 3D exhibition becomes more commonplace, conditions are expected to improve. Sandrew thinks the “demands of the industry” as well as technological improvements will drive change. “The professionals that are producing the creative product don’t want their product to deteriorate in any way, and certainly when it’s being exhibited they want it shown the same way that they intended.”
While 3D films seek consistent exhibition quality, ironically every audience member experiences 3D in a unique way. “I’m startled,” Hummel says. “I worked many years at Technicolor ensuring that every single print looked exactly the same, and then suddenly people make 3D movies and it doesn’t seem to matter anymore that the people sitting in row five are having a different experience than the people sitting in row twenty-five?”
When audiences watch a 3D film, their seat in the theatre will determine where their eyes converge to see the stereoscopic effect. Looking for that big 3D effect? Viewers seated farther back will experience amplified depth, with images appearing both deeper and popping out more. “If you walk up to the screen, all the things that seem deep behind the screen won’t seem very deep at all,” Hummel explains. “But then as you walk to the back of the theatre, everything will look incredibly deep. If you walk laterally left to right, you’ll notice that the behind-screen images tend to rotate on an axis at the screen plane.”
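The seat-position effect follows from simple viewing geometry. In the sketch below (a simplified model, not from the article; the 65 mm eye separation and 30 mm on-screen separation are assumed illustrative values), an image whose left- and right-eye copies are separated by p on screen converges V·p/(e − p) behind the screen for a viewer at distance V, so a viewer twice as far back sees twice the depth:

```python
EYE_SEPARATION_MM = 65.0  # commonly assumed interocular distance

def depth_behind_screen(view_dist_m, parallax_mm):
    """Distance behind the screen (m) at which the lines of sight cross,
    for uncrossed parallax smaller than the eye separation."""
    e = EYE_SEPARATION_MM
    return view_dist_m * parallax_mm / (e - parallax_mm)

for seat in (5.0, 10.0, 20.0):  # metres from the screen
    d = depth_behind_screen(seat, 30.0)
    print(f"seat at {seat:4.1f} m -> image appears {d:5.2f} m behind the screen")
```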
Perhaps because each audience member views the image differently, 3D movies engender an individual experience—and not just because it’s easier to hide those Pixar tears behind your 3D glasses. “The nice thing about 3D is that it becomes a personal experience. Every audience member is taking it in in a personal way, whereas in a 2D film it’s more about group dynamics,” Sandrew reflects.
3D movies also have the potential to bring viewers outside of the film by throwing an image out into the audience, breaking the fourth wall. “There are some feature films where the director really wants to come out and scare the audience, which is something that most of us hope goes away,” Sandrew says. “There are places for that, there are some movies that are intended to be cheesy like that, and it works. But for the most part, what we like to see is 3D being used as a way of telling the story in a more immersive way. That doesn’t involve scaring the audience by having something fall in their lap, so we try to avoid that.”
One of the “wow” scenes in Avatar, for example, involved floating seeds from the Tree of Souls. “Those seeds were actually significantly out into the audience,” DeJohn points out. “While in some scenes there is just as much depth as if they were pointing a spear at the audience, they’re not breaking the fourth wall in terms of the story, so people don’t notice the effect as much because it’s not screaming, ‘Look at me!’” While a movie like G-Force will break the fourth wall—“It’s a kids’ movie and that’s the right genre to do that,” DeJohn believes—movies like Alice in Wonderland create extreme depth without distracting the audience from the story.
“A lot of the shots in Alice in Wonderland were actually outside of the screen near the audience,” Sandrew reveals, “but you didn’t realize it, because it wasn’t there for shock value, it was there for aesthetics. If you look at a scene that we did for Alice, you’ll see that maybe half of the interior of a shot is outside of the screen.”
Alice in Wonderland was also originally planned to use the extra dimension for a Wizard of Oz-type effect. The real world would be shown in 2D, and Wonderland in 3D. “From what I understand,” Sandrew remarks, “Tim [Burton] was worried that people would just take off their glasses, think it wasn’t working, or think it’s bad 3D, so what they did was create a shallow representation of stereo [3D] in the bookends.”
In G-Force, “we played a lot with the sense of scale,” DeJohn recalls. “When we had a hamster’s point-of-view shot of a human, we would accentuate the depth of the human, so the human would have more or less the same depth as a skyscraper. We used 3D as a metaphor. I think we’re going to start seeing more of that. Just as composing a shot brings meaning to the shot if you shoot it from a low angle, or if you stage your characters in such a way to indicate who’s more powerful in the scene,” depth can bring meaning to a scene, he expounds.
For 3D to be more than a gimmick, this kind of storytelling will need to become a recognizable part of the 3D experience. Hollywood has undergone great formal changes before, from the lightning-speed transition to sound in just a few years, to the gradual predominance of color films in the marketplace, which took decades. Early entrants couldn’t resist flaunting their new tricks with vivid, saturated color and song-and-dance routines that delighted in the novelty of the experience. Primitive 3D movies, in fact, predate sound, but throughout 3D history the medium has been used for spectacle, not story. 3D faces the bizarrely contradictory goal of making the 3D effect part of the story and thus more invisible, while convincing audiences to spend five extra bucks for a more immersive experience. Compared to 3D’s brief heyday in the 1950s, this generation of 3D has better technology, more exhibitors lined up, and the ability to convert a 2D image into 3D. Audiences have already voted with their glasses, turning out in waves, setting box-office records, and propelling one movie, Avatar, to Titanic levels. This time, 3D may be here to stay.
The Science of 3D
What makes 2D-to-3D conversion so promising is that it’s all an illusion anyway. Whether you shoot natively in 3D or convert afterwards, no one would argue that the depth they see in a 3D movie matches what they see in real life, though they might not be able to explain why.
The 3D effect is created through a stereoscopic illusion, which forces your eyes to go against nature and focus in one place while converging in another. In real life, your eyes focus and converge at the same point. Hold a pencil in front of your nose: each eye focuses on the pencil, both eyes converge on it, and the brain fuses the two views into a composite image, gauging depth by interpreting the difference between what your left eye sees and what your right eye sees. If you alternate between shutting your left eye and your right eye, the pencil will appear to jump left and right, because your understanding of an object in space is determined by both eyes. Each eye has a slightly different perspective on the object.
Stereoscopic illusions work by separating the left-eye image from the right-eye image (the job of those polarized or active-shutter glasses) and giving each eye a slightly different picture of the pencil, forcing the eyes to meet, or converge, in front of or behind the screen. Because your eyes are focusing on the screen plane but converging in front of or behind it, the illusion of depth is created. When you take off your glasses, you can see double images on the screen; the farther apart they are, the more depth you see when you put the glasses back on.
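That focus/convergence split can be put into numbers with similar triangles (a simplified geometric model; the 65 mm eye separation is an assumed typical value): the eyes stay focused at the screen distance V, while a double-image separation of p on screen moves the convergence point to V·e/(e − p) from the viewer, behind the screen for uncrossed (positive) parallax and out into the audience for crossed (negative) parallax.

```python
E_MM = 65.0  # assumed interocular separation

def convergence_distance(view_dist_m, parallax_mm):
    """Distance from the viewer (m) at which the two lines of sight cross.
    p = 0 converges on the screen; p > 0 behind it; p < 0 in front of it."""
    return view_dist_m * E_MM / (E_MM - parallax_mm)

V = 10.0  # eyes stay focused on the screen, 10 m away
for p in (-30.0, 0.0, 30.0, 60.0):  # double-image separation in mm
    d = convergence_distance(V, p)
    print(f"parallax {p:+5.0f} mm: focus at {V} m, converge at {d:6.2f} m")
```

Note how a separation approaching the eye separation (60 mm here) pushes the convergence point far behind the screen, which is the geometric version of “the farther apart they are, the more depth.”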
If people are forced to maintain a huge difference between their convergence and focus points for a long time, a headache may ensue. According to Rob Hummel, Prime Focus’ CEO of post-production in North America, that’s also the reason traditional 3D theme-park rides with in-your-face 3D top out at less than 20 minutes: Eyes can’t take it much longer.
The stereoscopic illusion also has to battle with “primal” monocular cues. An object jumping out in front of you triggers the fight-or-flight reaction, an effect well-suited to horror movies like Final Destination 3D but less so to other genres.
The stereoscopic illusion is also at risk as an object moves to the edges of the screen. If a 3D object gets clipped by the screen border, the illusion can be destroyed. For that reason, many movies use floating windows, which mask the left and right sides of the frame, as well as top-and-bottom masking. For most of the film, these areas will appear black, but occasionally an object will break the edge of the screen, 3D effect intact. The technique has been around for decades, and many studios, such as Disney, employ it on their 3D films, including G-Force and Toy Story 3.
Matt DeJohn, whose company In-Three converted G-Force, explains: “Certain individual objects are allowed to go outside the border. Because they’re already playing out in audience space, you’re not perceiving it as an aspect-ratio change, you’re just perceiving it as being farther out of screen and in the theatre with me.” Besides avoiding depth conflicts, use of these borders can also create intense 3D effects.
The stereoscopic illusion is also at risk when filmmakers add too much depth too quickly. “For off-screen effects to work really well, you need to bring the objects off slowly, because you’re asking your eyes to do something they don’t normally do, they cross in a way. Your eyes need to be able to track with it,” Hummel explains.
Aware of this effect, companies try to keep depth and eye placement consistent from shot to shot. “What we often do is make sure that where you’re looking in the scene from one shot is fairly well-matched to the next shot, so the focal elements are matched in depth,” DeJohn elaborates. “We’ve been able to play with pretty aggressive depth through entire films. Because of the way the depth was controlled and transitioned from shot to shot, it was still a comfortable viewing experience.”
So why doesn’t 3D look like real life? “In real life, you have infinite planes of depth that exist instantaneously wherever you are focused,” Hummel reveals. “When you shoot with a camera, you can’t escape the fact that when viewing the images you will be forced into disconnecting your natural focus/convergence connection. The images have a type of depth illusion, but never quite what we see with our natural binocular vision”—a fact many watching a horror movie appreciate.