Bluescreen for TV Shows

[Image: greenscreen shot from Grey’s Anatomy]

Movies are often filmed on location, though this can be a complex and expensive process; streets need to be blocked off, permits need to be acquired, and so on. TV shows often can’t afford outdoor location filming, in terms of both time and money. On the other hand, there are outdoor shots in lots of TV shows; how do they do it? The answer is the extensive use of blue and green screens, which are replaced in post-production by realistic backgrounds. This is sometimes called the “virtual backlot”, as illustrated in this great demo reel from Stargate Studios:

Most of these blue/greenscreen effects are imperceptible to the viewer: everyday shots like two characters walking down a city street, or a character talking on their cell phone in front of a city skyline. Most of these shots are from TV shows that aren’t associated with flashy effects, like medical shows, law-and-order procedurals, and family comedies. I was especially impressed by the clip from Ugly Betty starting at about 2:12; hardly anything in this scene was “real”.
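
For intuition, here’s a minimal sketch of the chroma-key operation at the heart of these shots, in Python with OpenCV. The file names and the 0.3 softness constant are made up for illustration; production keyers like Keylight are far more sophisticated, handling green spill, motion blur, and sub-pixel edges.

```python
import cv2
import numpy as np

# Hypothetical file names; any greenscreen frame and background plate will do.
frame = cv2.imread("greenscreen_frame.png").astype(np.float32) / 255.0
background = cv2.imread("city_plate.png").astype(np.float32) / 255.0

# Crude "green dominance" key: alpha falls toward 0 as a pixel's green
# channel exceeds its red and blue channels.
b, g, r = cv2.split(frame)
green_excess = g - np.maximum(r, b)
alpha = np.clip(1.0 - green_excess / 0.3, 0.0, 1.0)

# Composite the keyed foreground over the new background, per channel.
alpha3 = cv2.merge([alpha, alpha, alpha])
composite = alpha3 * frame + (1.0 - alpha3) * background
cv2.imwrite("composite.png", (composite * 255).astype(np.uint8))
```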

Here’s a longer look at the effects Stargate did for ABC’s Revenge, much of which takes place in houses near the ocean. In many cases the camera isn’t moving much, which makes the problem easier, but there are a couple of shots that follow characters as they walk around a wrap-around porch that I thought were particularly impressive, starting at about 1:44 and 2:14:

In those shots, some matchmoving is probably involved, as opposed to pan/tilt shots, where one can get away with rendering different views of a spherical panorama. Keep in mind that these effects need to be turned around by the VFX company in a week (or less), so there isn’t much time to polish the tiniest details, like wisps of hair.
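
The distinction is worth spelling out. For a camera that only rotates about its optical center with intrinsic matrix K, pixels in two views are related by the depth-independent homography H = K R K^-1, so re-rendering a panorama suffices; once the camera translates, parallax appears and a full matchmove is required. A toy sketch with invented intrinsics:

```python
import cv2
import numpy as np

# Made-up intrinsics and a 10-degree pan; real values would come from
# calibration or from the matchmoving solve.
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# For a purely rotating camera, pixels map between views by H = K R K^-1,
# independent of scene depth -- no 3D reconstruction needed.
H = K @ R @ np.linalg.inv(K)

panorama_view = cv2.imread("panorama_crop.png")  # hypothetical file
warped = cv2.warpPerspective(panorama_view, H, (1920, 1080))
```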

Post Magazine has a great article on the types of visual effects involved in last season’s new TV shows — not just bluescreens for background replacement but more advanced work like changing the season of a shot or adding CGI creatures.


L.A. Noire Facial Capture

High-quality facial motion capture for filmmaking (e.g., Rise of the Planet of the Apes, Avatar, TRON: Legacy) is usually done with a combination of visible marker dots and a head-mounted camera rig (on-set) and the MOVA Contour system of phosphorescent makeup (off-set). There’ll be a longer blog post on this later, but the video below from Digital Domain, on the de-aging of Jeff Bridges in TRON: Legacy, illustrates the idea.

However, video game developer Team Bondi took a different approach for their 2011 video game L.A. Noire. They created a custom multi-view stereo capture environment, pictured below, to record the 3D faces and hair of a large number of performers; the resulting animated geometry was compressed and streamed directly into the game.

I started playing it last night and the effect is really striking! The video below explains the process in more detail, with many examples from the game. The technology, called MotionScan, was created by a company called Depth Analysis. Unfortunately, Team Bondi is no longer around, and it remains to be seen whether this approach will resurface in a new game or movie.

Greenscreen Practice Plates

So you’re a computer vision researcher and you think, segmentation with a green background, feature tracking, structure from motion — how hard could creating visual effects be? Try putting your money where your mouth is with these free HD greenscreen videos created by Hollywood Camera Work. The source videos illustrate tough matting problems involving wispy hair, transparent clothing, and motion blur, as well as matchmoving problems at various ranges with different numbers of artificial tracking markers. There’s also a page with free natural-environment tracking videos that provide good practice on feature detection/tracking and matchmoving.
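
If you want a quick start on the tracking plates, here’s a minimal feature-tracking loop in Python with OpenCV (the video file name is hypothetical). It detects Shi-Tomasi corners and follows them with pyramidal Lucas-Kanade optical flow, which is a far cry from a production tracker but shows the basic idea:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("tracking_plate.mp4")  # hypothetical file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Detect corners to track (Shi-Tomasi), then follow them frame to frame
# with pyramidal Lucas-Kanade optical flow.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good = new_pts[status.ravel() == 1]
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracks", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev_gray, pts = gray, good.reshape(-1, 1, 2)
```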

In addition to their intended use for helping VFX artists who are just starting out, these videos would also be a great resource for making homework problems in a course that uses my book. Found via Scott Squires’s great VFX blog.

Olympic VFX

If you’ve been watching television coverage of the London 2012 Olympics, you’ve probably seen plenty of impressive visual effects. The underlying technology for generating these effects is quite similar to that used for film and television production, with the caveat that all the effects have to be generated in real time (or at least quickly enough to be shown in an instant replay).

[Image: virtual lane markers]

The most common example, present in the coverage of almost every Olympic sport, is a motion graphic superimposed on the raw video that labels the lanes that runners or swimmers are in, or highlights what line a competitor has to beat in order to win a race. These types of effects have been around for a while, and are created using feature tracking and well-calibrated cameras that know their relationship to the 3D plane where the graphics should go (e.g., the track or pool surface).
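
Conceptually, once you know where the playing surface sits in the image, placing a graphic on it is a single planar warp. Here’s a toy sketch in Python with OpenCV; the corner coordinates are invented, whereas a real system derives them continuously from instrumented, calibrated cameras (and also keys the graphic so it appears behind the athletes):

```python
import cv2
import numpy as np

graphic = cv2.imread("lane_label.png")  # hypothetical overlay graphic
frame = cv2.imread("pool_frame.png")    # hypothetical broadcast frame
h, w = graphic.shape[:2]

# Where the four corners of the graphic should land on the pool surface,
# in image coordinates. In a real system these come from the calibrated
# camera (pan/tilt/zoom encoders plus a survey of the venue); here they
# are invented for illustration.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[620, 410], [980, 415], [1020, 530], [580, 525]])

H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(graphic, H, (frame.shape[1], frame.shape[0]))

# Composite the warped graphic onto the frame wherever it is non-black.
mask = warped.sum(axis=2) > 0
frame[mask] = warped[mask]
cv2.imwrite("frame_with_graphic.png", frame)
```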

There’s a cool, more advanced effect being deployed in diving and gymnastics coverage, in which the camera seems to swivel around the athlete in 3D as they’re frozen in mid-air (à la The Matrix). This is done with a combination of real-time foreground segmentation from multiple cameras, a fast multiview stereo algorithm, and production visual effects software like The Foundry’s Nuke. The underlying approach is clearly explained in this video about the i3DLive system developed by The Foundry, the University of Surrey, and BBC Research and Development. Sorry I couldn’t figure out how to embed it here, but it’s really worth a look.
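
The i3DLive pipeline itself isn’t public code, but its first stage, real-time foreground segmentation, can be crudely approximated for a fixed camera with a stock background-subtraction model. A rough sketch, with a hypothetical input file:

```python
import cv2

cap = cv2.VideoCapture("fixed_camera_feed.mp4")  # hypothetical feed
# Mixture-of-Gaussians background model; only valid for a static camera
# and far cruder than a production segmentation pipeline.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300,
                                                varThreshold=25,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Shadows are labeled 127; keep only confident foreground (255).
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    cv2.imshow("foreground", fg)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```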

There’s a ton of interesting information on the BBC Research and Development web site. For example, check out these pages on augmented reality athletics, markerless motion capture for biomechanics, and all sorts of “Production Magic” techniques. These are great applications of computer vision for visual effects!

In the US, many of these types of effects (e.g., the virtual first down line in football) are created by a company called Sportvision; many video examples can be seen here.

Classical Matte Paintings

Chapter 2 of the book is all about Image Matting, the separation of a natural image into foreground and background elements. It’s not quite like putting a jigsaw puzzle together, since the “pieces” are fuzzy (e.g., the background partially shows through an actor’s wispy hair). The matting problem gets its name from the way scenes in old-school Hollywood movies were created: expert artists would paint large, detailed scenes on panes of glass placed between the camera and the set, so that the live action fused (hopefully) seamlessly with the matte. The image above is a classic shot from Raiders of the Lost Ark; the gray region is the clear part of the glass through which the live scene was shot. As you can imagine, matching the perspective and lighting of the live action is very tricky!
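
Digital matting formalizes this with the compositing equation: each observed pixel color C is modeled as a mix of an unknown foreground color F and background color B, weighted by an unknown opacity α:

```latex
C = \alpha F + (1 - \alpha)\, B, \qquad \alpha \in [0, 1]
```

At each pixel this gives three equations (one per color channel) in seven unknowns (F, B, and α), which is why matting algorithms need priors, user scribbles, or a known background like a greenscreen.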

You can find many pictures of classical matte paintings online; for example, see the great list at Shadowlocked. However, I had a hard time finding a good picture showing a glass painting in line with the camera to produce a composite. The best I could come up with was this example from a ’90s miniseries called The Last Days of Pompeii, in which the volcano is painted on glass at the upper left, and you can see how the painting and the real scene line up:

I found this example on the blog Matte Shot, which is a great, detailed resource. This picture is from a long article about master matte artist Leigh Took. In my research, I also enjooyed Raymond Fielding’s book The Technique of Special Effects Cinematography, which has lots of details on the “good old days”.

3D Models of Classical Sculptures

It’s become increasingly easy for the average person to create 3D models of objects simply by taking lots of images. This problem is also known as multiview stereo, and many ways to approach it are discussed in Section 8.3 of the book.

Recently, a team of volunteers went to the Metropolitan Museum of Art to acquire lots of pictures of classical sculptures, which were then processed into 3D models using Autodesk’s 123D Catch software. This free multiview stereo tool makes it really easy to create your own 3D models. The multiview stereo algorithms under the hood are from acute3D, a French company that gave a great presentation at CVPR 2012.
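
Under the hood, the geometric core of any such pipeline is triangulation: once the cameras are solved (by structure from motion), a point matched in two views is reconstructed by intersecting the two viewing rays. A toy sketch with invented cameras and correspondences:

```python
import cv2
import numpy as np

# Two invented camera projection matrices P = K [R | t]; in a real
# pipeline these come from a structure-from-motion solve.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2 = cv2.Rodrigues(np.array([[0.0], [0.1], [0.0]]))[0]  # small rotation
t2 = np.array([[-0.5], [0.0], [0.0]])                   # baseline to the right
P2 = K @ np.hstack([R2, t2])

# Invented matching image points (pixels) for one surface point per view.
x1 = np.array([[700.0], [400.0]])
x2 = np.array([[540.0], [398.0]])

# Linear triangulation returns homogeneous 4-vectors; divide by w.
X = cv2.triangulatePoints(P1, P2, x1, x2)
X = (X[:3] / X[3]).ravel()
print("reconstructed 3D point:", X)
```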

This article from MakerBot and this article from The Creators Project have more details and pictures from the project.

Pre-digital Photo Manipulation

Photo manipulation didn’t start with Photoshop! Several famous historical photos were actually manually altered.

For example, this image of General Ulysses S. Grant from the mid-1800s was actually constructed from three source images taken in very different places at different times. General Grant’s head was taken from one image, the body and horse from a different person in another image, and the background from an entirely different scene. Section 3.3 of the book addresses automatic ways to solve this problem.
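
One standard automatic tool of the kind Section 3.3 discusses is gradient-domain (Poisson) compositing, which matches gradients across the seam rather than raw colors, so lighting differences between the sources are smoothed away. OpenCV exposes a version of it; a minimal sketch with hypothetical file names and an invented placement point:

```python
import cv2
import numpy as np

head = cv2.imread("grant_head.png")  # hypothetical source region
body = cv2.imread("other_body.png")  # hypothetical target image

# White mask over the region of the source to transfer.
mask = 255 * np.ones(head.shape[:2], dtype=np.uint8)
center = (400, 150)  # invented landing spot in the target image

# Poisson (gradient-domain) blending: the seam becomes invisible because
# gradients, not absolute colors, are preserved.
composite = cv2.seamlessClone(head, body, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.png", composite)
```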

This image of William Lyon Mackenzie King with Queen Elizabeth, from 1939, is an early example of manually inpainting a large hole with complex texture: King George VI was fully removed from the picture! Section 3.4 of the book addresses automatic ways to solve this problem.
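
For comparison, automatic hole filling can be prototyped with OpenCV’s built-in inpainting, though the fast-marching method below struggles with exactly the kind of large, textured hole in this photo; exemplar-based or learned methods do much better. A sketch with hypothetical file names:

```python
import cv2

photo = cv2.imread("group_photo.png")  # hypothetical image
# Nonzero pixels mark the hole to fill (e.g., the removed figure).
hole_mask = cv2.imread("hole_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's fast-marching inpainting: fine for small scratches and holes,
# but large regions with complex texture need more advanced methods.
filled = cv2.inpaint(photo, hole_mask, inpaintRadius=5,
                     flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled.png", filled)
```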

These examples came from this slideshow at the New York Daily News.