Modern blockbuster movies seamlessly introduce impossible characters and action into real-world settings using digital visual effects. These effects are largely made possible by research from computer vision. This tutorial will educate students, engineers, and researchers about the fundamental computer vision principles and state-of-the-art algorithms used to create cutting-edge visual effects for movies and television.
The tutorial will take place the day before the main conference begins, from 8:30 AM to 12:00 PM on June 7, 2015 in Room 105 of the Hynes Convention Center in Boston, Massachusetts.
We will begin with a general overview of computer vision algorithms involved in the visual effects pipeline. Next, several computer vision experts and visual effects artists will discuss some of their recent contributions in depth. The tutorial will conclude with a panel discussion framed by the organizer about the challenges of incorporating CVPR-level vision research into the real-world visual effects pipeline, which will include plenty of time for questions from the audience.
The agenda is:
|8:30-9:15||Computer Vision for Visual Effects||Rich Radke (Rensselaer Polytechnic Institute)|
|Computer vision plays a critical role in many visual effects tasks, including image matting, image editing and compositing, feature tracking, estimating dense correspondence between images, camera tracking, motion capture, and three-dimensional data acquisition. While many algorithms used regularly in Hollywood can trace their lineage to academic computer vision research (such as blue-screen matting, structure from motion, optical flow, and structured light scanning), other familiar algorithms from the vision community have yet to find regular use in the real-world visual effects pipeline (such as SIFT feature matching and image retargeting). We will survey the general classes of visual effects tasks where computer vision research can play a role, and discuss the challenges of working with movie data versus research-lab images.|
|Richard J. Radke is a Full Professor in the Department of Electrical, Computer, and Systems Engineering at Rensselaer Polytechnic Institute. His current research interests include computer vision problems related to modeling 3D environments with visual and range imagery, designing and analyzing large camera networks, and machine learning problems for radiotherapy applications. Dr. Radke is affiliated with the NSF Engineering Research Centers for Subsurface Sensing and Imaging Systems (CenSSIS) and Smart Lighting, the DHS Center of Excellence on Explosives Detection, Mitigation and Response (ALERT), and Rensselaer’s Experimental Media and Performing Arts Center (EMPAC). He received an NSF CAREER award in March 2003 and was a member of the 2007 DARPA Computer Science Study Group. Dr. Radke is a Senior Member of the IEEE and an Associate Editor of IEEE Transactions on Image Processing. His textbook Computer Vision for Visual Effects was published by Cambridge University Press in 2012.|
|9:15-9:45||Matting and Composition in Professional Workflows||Brian Price (Adobe)|
|Matting (creating a selection of an object in an image or video) and composition (inserting an element into an image or video) are fundamental operations in many photo and video special effects workflows. In this talk, we will discuss the primary workflows currently used by professionals for matting and composition, and compare and contrast them with current research in these areas. In doing so, we will highlight where current research is having an impact on professional workflows, as well as where academic research directions are not addressing the needs of professional tools and where research methods fall short in practice. We hope this will serve as inspiration for future research that is more amenable to industrial and professional use.|
|Brian Price is a Senior Research Scientist at Adobe Research specializing in computer vision. His research interests include semantic segmentation, interactive object selection and matting in images and videos, stereo and RGB-D, and image processing, as well as broader interests in computer vision and its intersections with machine learning and computer graphics. Before joining Adobe, he received his PhD in Computer Science from Brigham Young University under the advisement of Dr. Bryan Morse. As a researcher at Adobe, he has contributed new features to many Adobe products, including Photoshop, Photoshop Elements, and After Effects, mostly involving interactive image segmentation and matting.|
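The matting and composition operations described in this session combine through the standard compositing equation C = αF + (1 − α)B, where α is the matte, F the foreground, and B the background. Below is a minimal NumPy sketch of that equation; the crude chroma-key threshold used to produce α here is an illustrative assumption for demonstration only, not a professional matting algorithm:

```python
import numpy as np

def green_screen_alpha(frame, strength=1.0):
    """Crude chroma-key matte: alpha drops where green dominates red/blue.

    Illustrative only -- production matting estimates alpha far more
    carefully (e.g. closed-form or learning-based matting).
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # Green "spill" measure: how much green exceeds the other channels.
    spill = g - np.maximum(r, b)
    return np.clip(1.0 - strength * spill, 0.0, 1.0)

def composite(fg, bg, alpha):
    """Standard compositing equation: C = alpha*F + (1 - alpha)*B."""
    return alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg

# Toy 2x2 foreground: the right column is pure green screen.
fg = np.array([[[0.9, 0.2, 0.1], [0.0, 1.0, 0.0]],
               [[0.5, 0.5, 0.5], [0.0, 1.0, 0.0]]])
bg = np.zeros_like(fg) + np.array([0.1, 0.1, 0.8])  # solid blue backdrop
alpha = green_screen_alpha(fg)
out = composite(fg, bg, alpha)
# Green-screen pixels take the backdrop color; the others keep the foreground.
```

Real pipelines work with fractional α along hair and motion-blurred edges, which is exactly where the simple threshold above breaks down and research-grade matting is needed.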
|9:45-10:15||Advances in Photoreal Digital Humans in Film and in Real-Time||Paul Debevec (USC)|
|We have entered an age where even the human actors in a movie can be created as computer-generated imagery. Somewhere between “Final Fantasy” in 2001 and “The Curious Case of Benjamin Button” in 2008, digital actors crossed the “Uncanny Valley” from looking strangely synthetic to believably real. This talk describes how the Light Stage scanning systems and HDRI lighting techniques developed at the USC Institute for Creative Technologies have helped create digital actors in a wide range of recent films. As in-depth examples, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create “Digital Emily”, a collaboration with Image Metrics (now Faceware) that yielded one of the first photoreal digital actors, and 2013’s “Digital Ira”, a collaboration with Activision Inc. that yielded the most realistic real-time digital actor to date. The talk covers recent developments in HDRI lighting, polarization difference imaging, skin reflectance measurement, and 3D object scanning, and concludes with advances in autostereoscopic 3D displays enabling 3D teleconferencing, holographic characters, and cultural preservation.|
|Paul Debevec is a Research Professor at the University of Southern California and the Chief Visual Officer at USC’s Institute for Creative Technologies. Since his 1996 Ph.D. at UC Berkeley, Debevec’s publications and animations have focused on techniques for photogrammetry, image-based rendering, high dynamic range imaging, image-based lighting, appearance measurement, facial animation, and 3D displays. Debevec is an IEEE Senior Member and Co-Chair of the Academy of Motion Picture Arts and Sciences’ (AMPAS) Science and Technology Council. He received a Scientific and Engineering Academy Award® in 2010 for his work on the Light Stage facial capture systems, used in movies including Spider-Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, Tron: Legacy, The Avengers, Oblivion, Gravity, and Maleficent. In 2014, Debevec was profiled in The New Yorker magazine’s “Pixel Perfect: the scientist behind the digital cloning of actors” article by Margaret Talbot. He also recently worked with the Smithsonian Institution to scan a 3D model of President Barack Obama.|
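The HDRI image-based lighting techniques featured in this session illuminate computer-generated objects with real-world light captured in a panoramic HDR environment map. A toy NumPy sketch of the core idea, computing diffuse irradiance for a single surface normal from a latitude-longitude map (naive numerical integration for illustration; production renderers use prefiltered maps, spherical harmonics, or importance sampling):

```python
import numpy as np

def diffuse_irradiance(env_map, normal):
    """Diffuse irradiance at a surface with the given normal, lit by a
    latitude-longitude HDR environment map (H x W x 3, linear radiance).

    Integrates L(w) * max(0, n.w) over the sphere, one texel at a time.
    """
    h, w, _ = env_map.shape
    # Direction vector for the center of each texel of the lat-long map.
    theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    dirs = np.stack([
        np.outer(sin_t, np.cos(phi)),
        np.outer(sin_t, np.sin(phi)),
        np.outer(cos_t, np.ones(w)),
    ], axis=-1)                                       # H x W x 3
    # Lambertian cosine term, clamped to the upper hemisphere.
    cos_term = np.clip(dirs @ np.asarray(normal, float), 0.0, None)
    # Solid angle of each texel: sin(theta) * dtheta * dphi.
    d_omega = sin_t[:, None] * (np.pi / h) * (2.0 * np.pi / w)
    weight = cos_term * d_omega
    # Divide by pi so a uniform unit-radiance environment yields 1.0.
    return (env_map * weight[..., None]).sum(axis=(0, 1)) / np.pi

# Sanity check: a uniform white environment of radiance 1.0 should produce
# irradiance ~1.0 for any normal (here, straight up).
env = np.ones((64, 128, 3))
irr = diffuse_irradiance(env, [0.0, 0.0, 1.0])
```

In a real HDRI pipeline the `env_map` would come from a light probe or panoramic HDR capture of the set, so the CG object picks up the same illumination as the photographed plates.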
|10:30-11:00||The VFX Pipeline||Brian Drewes (Zero VFX)|
|This talk will discuss the off-the-shelf software most commonly used by facilities creating complex effects, as well as where custom software tools can improve efficiency and flexibility when creating and integrating CG assets with photographic elements.|
|Brian Drewes has 20 years of experience creating visual effects and is the CEO and Co-Founder of Zero VFX, which provides visual effects and animation services for the feature film and commercial markets. Zero primarily focuses on ‘invisible effects’ built upon photographic assets. Brian also successfully oversaw the start-up, funding, market strategy, and software development of ZYNC render, acquired by Google, Inc. in 2014.|
|11:00-11:30||Scanning the Stars: Making 3D Digital Assets for Hollywood VFX||Michael Raphael (Direct Dimensions)|
|Over the past seven years, Direct Dimensions has become a go-to company for Hollywood movie studios for 3D scanning of film sets, props, vehicles, locations, and even many A-list actors. Michael will showcase the latest methods and technologies used to create 3D models for visual effects (VFX) in major released films, including several Oscar winners.|
|With a portfolio of over 25 major motion pictures on the Direct Dimensions IMDb page, Michael will show behind-the-scenes images of scanning with a wide range of 3D hardware and software, including their full-body camera rig for instant 3D capture, lidar location scanning, close-range scanning of costumes and small props, and much more. Michael’s teams have been on set in locations including New York, New Orleans, Vancouver, California, and even Iceland for some amazing projects. Due to the nature of the materials, no photography or recording of any kind will be permitted.|
|11:30-12:00||Panel discussion: VFX in academia vs. the real world||All|