Understanding The Video Creation Pipeline

So you want to make videos?

But where to start?


The typical process for a video project is:

planning -> shooting -> rough edit -> VFX & SFX (sound effects) -> compositing -> more editing (-> more shooting -> …)

Let’s begin with the names of some common items and tasks and define what they are.



VFX (Visual Effects)

StudioBinder.com has a great definition for visual effects:

Visual effects (VFX) is a term used to describe imagery created, manipulated, or enhanced for any film, or other moving media, that cannot take place during live-action shooting. VFX is the integration of actual footage and this manipulated imagery to create realistic-looking environments for the context.

The difference between VFX and special effects (SFX) is that VFX are added after shooting. SFX are done during filming.

SFX (Special Effects)

Special effects are illusions created to be used during filming, such as prosthetic makeup, pyrotechnics, etc.

SFX (Sound Effects)

All the audio magic that goes into making a video, from in-your-face effects to subtle foley work.


ADR (Automated Dialogue Replacement)

Automated Dialogue Replacement is the process of re-recording actors’ dialogue, either to improve the sound quality or to change what was said. The term is also often used for voice-actor recordings for games and animation.


CGI

CGI is an abbreviation for Computer Generated Imagery.

This can be both 2D and 3D.


Assets

Assets are all the things that go into making a video or film: video files, photographs, 3D camera motion tracks, scripts, and more.


Non-Linear Editing (NLE)

Before digital systems, moving images and sound were captured on long strips of photographic film. Editing was done by cutting and splicing lengths of film together. This was known as linear editing.

Digital video editing systems store clips of video as files. These files can be dissected and reassembled in any order, and a clip can even be used more than once. There is no requirement to assemble them in a strictly linear fashion. Hence digital video editing is called non-linear editing, or NLE.

Color Correction

Color correction is the process of correcting problems in the color of captured video. The goal of color correction is to create a video that looks as it would to the human eye.

See the StudioBinder article for more details.

Color Grading

Color grading is the art of modifying the color profile of a video to help communicate the story. Adjustments to the hue, saturation, and brightness of a clip can greatly change its emotional impact. This is an artistic process, whereas color correction can be scientifically measured.
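To make the idea concrete, here is a toy sketch of the kind of adjustment a grade applies (NumPy, assuming float RGB values in [0, 1]; the function name and parameters are illustrative, not any tool’s real API):

```python
import numpy as np

def grade(img, brightness=0.0, saturation=1.0):
    """Very simple grade: add `brightness`, then scale each pixel's
    distance from its luma by `saturation` (0 = grayscale,
    1 = unchanged, >1 = more saturated). Clips to [0, 1]."""
    # Rec. 709 luma weights for the per-pixel gray value.
    luma = (img @ np.array([0.2126, 0.7152, 0.0722]))[..., None]
    graded = luma + (img - luma) * saturation + brightness
    return np.clip(graded, 0.0, 1.0)

# Desaturate a pure-red pixel toward gray and lift it slightly.
red = np.array([[[1.0, 0.0, 0.0]]])
print(grade(red, brightness=0.1, saturation=0.5))
```

A real grading tool exposes far more controls (lift/gamma/gain, curves, per-hue adjustments), but they are all transformations of pixel values along these lines.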

Rotoscoping

Rotoscoping is the process of altering the individual frames of a film or video. This could range from painting on makeup and removing skin blemishes to masking out entire areas of a frame.

Note: all masking is rotoscoping, but not all rotoscoping is masking.


Masking

Masking is the process of indicating an area of interest in a video sequence. Masks are often used to keep only a specific area of a video sequence, such as an actor, so that it can be composited together with other footage.

However, they have many more uses, such as selectively color grading an area of interest.

Note: masking is a type of rotoscoping.
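At its simplest, a mask is just an image of keep/discard values per pixel. A minimal sketch (NumPy, with a hypothetical rectangular region of interest):

```python
import numpy as np

def rect_mask(height, width, top, left, bottom, right):
    """Build a binary mask for a frame of the given size that
    keeps only a rectangular region (1 = keep, 0 = discard)."""
    mask = np.zeros((height, width))
    mask[top:bottom, left:right] = 1.0
    return mask

# Keep a 2x2 region of a 4x4 frame.
m = rect_mask(4, 4, 1, 1, 3, 3)
print(m)
```

In practice masks are hand-drawn splines or generated shapes, animated over time, and usually have soft (feathered) edges rather than hard 0/1 values.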


Compositing

Compositing is the process of overlaying multiple video sequences to create a final shot.

Each of the video sequences may be individually color graded, masked, and otherwise manipulated to create the final look.
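The core operation under all of this is alpha blending: each layer’s mask becomes an alpha value that decides how much of the foreground versus the background shows through. A minimal sketch (NumPy, with made-up single-pixel “layers”):

```python
import numpy as np

def composite_over(fg, bg, alpha):
    """Alpha-composite a foreground over a background.

    fg, bg: float arrays of shape (..., 3), values in [0, 1]
    alpha:  scalar or array broadcastable to fg, values in [0, 1]
            (1 = fully foreground, 0 = fully background)
    """
    return fg * alpha + bg * (1.0 - alpha)

# One red foreground pixel over a blue background,
# at 25%, 50%, and 100% opacity.
fg = np.array([[1.0, 0.0, 0.0]])  # red
bg = np.array([[0.0, 0.0, 1.0]])  # blue
for a in (0.25, 0.5, 1.0):
    print(a, composite_over(fg, bg, a))
```

Real compositors stack many of these operations (the “over” operator) in a node graph or layer stack, but each step reduces to this per-pixel blend.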

Motion Tracking & Match Moving

Motion tracking is the process of analyzing a video sequence to recover the 2D motion of items in the scene.

Match moving extends this to 3D: not only tracking the motion of objects in the scene but also deriving the camera’s motion.

Planar (2D) motion tracking allows items to be “pinned”: a replacement 2D image can be inserted over something, and 2D transformations (translation, rotation, scale, skew) are applied to keep the replacement aligned with the tracked item.

However, skew is only an approximation of true perspective, so the result may not look correct.
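The 2D transformations listed above can all be combined into a single affine matrix that is re-estimated for every frame. A minimal sketch (NumPy, with hypothetical corner points for the pinned image):

```python
import numpy as np

def affine_2d(tx=0.0, ty=0.0, angle=0.0, scale=1.0, skew=0.0):
    """Build a 3x3 affine matrix combining translation,
    rotation (radians), uniform scale, and horizontal skew."""
    c, s = np.cos(angle), np.sin(angle)
    rotate_scale = np.array([[c * scale, -s * scale, 0.0],
                             [s * scale,  c * scale, 0.0],
                             [0.0,        0.0,       1.0]])
    skew_m = np.array([[1.0, skew, 0.0],
                       [0.0, 1.0,  0.0],
                       [0.0, 0.0,  1.0]])
    translate = np.array([[1.0, 0.0, tx],
                          [0.0, 1.0, ty],
                          [0.0, 0.0, 1.0]])
    return translate @ rotate_scale @ skew_m

def apply_affine(m, points):
    """Apply a 3x3 affine matrix to an (N, 2) array of points."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    return (pts @ m.T)[:, :2]

# "Pin" a unit square: move it 10 right, 5 up, double its size.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(apply_affine(affine_2d(tx=10, ty=5, scale=2), square))
```

An affine matrix cannot represent perspective foreshortening, which is why planar trackers that only solve for these parameters can drift from a surface that tilts toward or away from the camera.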

Match move recovers the camera’s motion and can be used to create a 3D representation of the scene. This allows for the creation of replacement items in 3D which will look (more) correct when composited.


Keying

Keying is the process of automatically removing part of a captured scene. The area to be removed can be indicated by color; this is known as color, or chroma, keying. This is the famous green screen or blue screen, but other colors, such as red, can also be used. It’s important to choose a color that does not appear on the thing you want to keep.

Keying can also be done based on the brightness. This is called a luma key.

Lighting is very important for keying, as is the distance between the actors and the colored screen. Too much light reflected from the screen will spill its color onto the actors, which can make it difficult or impossible to pull the key.
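As a rough illustration of both ideas (a naive NumPy sketch, nowhere near a production keyer), a chroma key can mark a pixel as screen when its green channel clearly dominates, and a luma key when its brightness falls below a threshold:

```python
import numpy as np

def chroma_key_green(img, dominance=0.2):
    """Return an alpha mask: 0 where green clearly dominates red
    and blue (treated as green screen), 1 elsewhere (keep).

    img: float array of shape (H, W, 3), values in [0, 1].
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    is_screen = (g - np.maximum(r, b)) > dominance
    return np.where(is_screen, 0.0, 1.0)

def luma_key(img, threshold=0.1):
    """Return an alpha mask: 0 where luma (Rec. 709 weights)
    falls below the threshold, 1 elsewhere."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])
    return np.where(luma < threshold, 0.0, 1.0)

# 1x2 "frame": a green-screen pixel next to a skin-tone pixel.
frame = np.array([[[0.1, 0.9, 0.1],    # green screen -> keyed out
                   [0.8, 0.6, 0.5]]])  # actor -> kept
print(chroma_key_green(frame))
```

This hard 0/1 decision is exactly what breaks down with spill: a spill-lit actor’s pixels drift toward the screen color and start passing the dominance test, which is why real keyers add soft thresholds, edge handling, and spill suppression.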

Tools For Each Task

Many tools can be used for more than one task.

They will be listed in approximate “best-for-this-task” order.


Script Writing


CGI

  • 3DS Max, Maya
  • Blender


Video Editing (NLE)

  • DaVinci Resolve, Avid, …
  • Adobe Premiere
  • HitFilm
  • Blender

Color Correction & Grading

All Compositors and most NLEs.


Compositing

Most are node based; HitFilm is layer based.

Layers are easier for most beginners to understand, but node based systems are more powerful.

  • Nuke
  • Natron
  • Fusion (part of DaVinci Resolve)
  • Blender
  • HitFilm

Motion Tracking

All Compositors


Keying

All Compositors and some NLEs.

Note: Compositors will have a lot more keying functionality and control than NLEs. Even packages that offer both NLE and compositor features will often have more advanced keying available in the compositor module.