Flare Experiment
An introduction to visual effects - part of FutureLearn's "Visual Effects for Guerrilla Filmmakers" course.
1 December 2018
Project information
- Date of production:
- February 2016
- Created on:
- HitFilm 3 Express
- Course material:
- FutureLearn
The opening assignment for the FutureLearn Visual Effects for Guerrilla Filmmakers course. This was a one-week assignment that focused on "tracking", also known as match moving. Match moving is a cinematic technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry.
Importantly it must be noted that this is not the same as motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as the Kinect camera and Apple's FaceID have begun to change this).
Match moving is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3D animation program. When new animated elements are composited back into the original live-action shot, they will appear in perfectly matched perspective and therefore appear seamless.
Principle
The process of match moving can be broken down into two steps.
Tracking
The first step is identifying and tracking features. A feature is a specific point in the image that a tracking algorithm can lock onto and follow through multiple frames (SynthEyes calls them blips). Features are often selected because they are bright/dark spots, edges, or corners, depending on the particular tracking algorithm. Popular programs use template matching based on NCC score and RMS error. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a "track". Once tracks have been created they can be used immediately for 2D motion tracking, or they can be used to calculate 3D information.
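The template-matching idea above can be sketched in a few lines of Python. This is a toy illustration, not what HitFilm or SynthEyes actually run internally: it slides a small template over a greyscale frame (stored as nested lists), scores every position with normalised cross-correlation (NCC), and reports the best match - the feature's 2D coordinate for that frame. Repeating this per frame yields the "track".

```python
# Toy sketch of feature tracking via normalised cross-correlation (NCC).
# Frames are greyscale images stored as nested lists of pixel values.
import math

def ncc(patch, template):
    """NCC score between two equal-length flattened patches (-1.0 to 1.0)."""
    n = len(patch)
    mean_p = sum(patch) / n
    mean_t = sum(template) / n
    num = sum((p - mean_p) * (t - mean_t) for p, t in zip(patch, template))
    dev_p = math.sqrt(sum((p - mean_p) ** 2 for p in patch))
    dev_t = math.sqrt(sum((t - mean_t) ** 2 for t in template))
    if dev_p == 0 or dev_t == 0:
        return 0.0  # a flat patch carries no texture to correlate against
    return num / (dev_p * dev_t)

def track_feature(frame, template, tw, th):
    """Slide the template over the frame; return the best (x, y) and its score."""
    h, w = len(frame), len(frame[0])
    flat_t = [v for row in template for v in row]
    best_score, best_xy = -2.0, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            flat_p = [frame[y + j][x + i] for j in range(th) for i in range(tw)]
            score = ncc(flat_p, flat_t)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

An exact copy of the template in the frame scores 1.0; real trackers refine this with sub-pixel interpolation and only search a small window around the previous frame's position.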
Calibration
The second step involves solving for 3D motion. This process attempts to derive the motion of the camera by solving the inverse projection of the 2D paths, recovering the position of the camera at each frame. This process is referred to as calibration.
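The "inverse projection" idea can be illustrated with a deliberately simplified toy model (my assumption, not part of the course material): a pinhole camera with a known focal length that only translates along the x axis. Forward projection maps a 3D point to a 2D pixel; calibration runs that mapping backwards, asking which camera position would have produced the observed track. Real solvers estimate all six degrees of freedom (and often the focal length too) from many tracks simultaneously.

```python
# Toy sketch of inverse projection for a pinhole camera constrained to
# translate along the x axis. F and CX are assumed intrinsics (pixels).
F, CX = 800.0, 640.0

def project(point, cam_x):
    """Forward projection: pixel u-coordinate of a 3D point seen from (cam_x, 0, 0)."""
    x, y, z = point
    return F * (x - cam_x) / z + CX

def solve_cam_x(point, u):
    """Inverse projection: recover the camera's x position from one observation."""
    x, y, z = point
    return x - (u - CX) * z / F

# A feature fixed in the scene, observed across three frames of a 2D track:
feature = (1.0, 0.5, 4.0)
true_path = [0.0, 0.1, 0.25]                      # the camera move to recover
track = [project(feature, t) for t in true_path]  # what the tracker measures
solved = [solve_cam_x(feature, u) for u in track]
```

With one known 3D point and one unknown parameter the inversion is exact; in practice the 3D positions are also unknown, so solvers minimise reprojection error over all tracks at once.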
The flare
In this situation students were supplied with two pieces of footage:
- The actor holding an unlit flare, looking around the room, lit as if the flare were burning;
- Stock footage of the burning top of a flare.
The two pieces of footage were to be composited by layering the stock footage of the burning flare over the end of the unlit flare. This involved tracking the end of the unlit flare so the burning-flare layer could follow its movement, making it appear that the actor was holding a lit flare.
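The compositing step above reduces to simple per-frame arithmetic once the track exists. A minimal sketch, with assumed coordinates: given the tracked (x, y) position of the flare tip in each frame, and an anchor point inside the stock clip (the base of the flame) that should sit on the tip, the overlay layer's top-left position is just the track minus the anchor offset.

```python
# Toy sketch of applying a 2D track to position an overlay layer.
# All coordinates are in pixels; the values below are made up for illustration.
def overlay_positions(track, anchor):
    """Per-frame top-left position for the overlay so its anchor follows the track."""
    ax, ay = anchor
    return [(x - ax, y - ay) for x, y in track]

tip_track = [(320, 180), (324, 179), (329, 181)]  # output of the tracking step
flame_anchor = (16, 60)                           # base of the flame in the stock clip
positions = overlay_positions(tip_track, flame_anchor)
```

Compositing applications such as HitFilm hide this arithmetic behind the tracker's "apply to layer position" workflow, but this is essentially what a 2D point track drives.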
But why not just shoot the actor holding a lit flare?
There are several real-world reasons why this may not be practical and why the shot is better achieved with a visual effect.
- Cost - multiple takes mean multiple flares, and these costs all add up.
- Safety - flares burn at extremely high temperatures, and insufficient safety precautions are a health and safety nightmare waiting to happen.
- Small crew - removing the need for on-set fire safety means fewer crew are required in a small space, which also keeps costs down.
- Flexibility - lower costs, a smaller crew, and fewer procedures mean different takes can be attempted; say the script suddenly calls for the flare to be knocked out of the actor's hand?