Step 1: Capturing Light
We start with the lens. Some key parameters are:
- size (larger can collect more light)
- angle of view
- magnification/focal length
- distortions (imperfect symmetry changes shapes in the image; inconsistencies in the glass can cause color aberrations)
After the lens the light goes through the aperture and shutter. The aperture is usually a round opening that can be made larger or smaller.
It has two effects:
- Controlling the amount of light passing through to the sensor - the camera's sunglasses
- Changing the aperture size alters the depth of field: the range of distances that will appear crisp and in focus. It seems counter-intuitive at first glance: a very large aperture results in a narrow depth of field, or a very produced/cinematic look. BUT this can allow too much light onto the sensor, so achieving a very narrow depth of field may require filters to be added to the lens (sunglasses for the camera)
Aside: large aperture (or “fast”) lenses also allow for pleasing effects like bokeh.
The shutter controls how long light is allowed to fall on the sensor.
The shutter speed (how quickly it opens and closes) has two main effects:
- limiting the total amount of light reaching the sensor
- controlling the amount of motion blur
Let’s clarify (pun intended) motion blur. Given a camera pointing in a fixed direction, and a bouncing ball in front of it, what happens to the light bouncing off a single point on the ball? With a slow shutter speed (long exposure), the light from that point is detected across many points on the camera sensor; it is smeared out, indistinct, blurred. With a fast shutter speed (short exposure), the light from that single point on the ball is detected on far fewer points on the sensor. The image will be more distinct and clear; not blurred.
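To put rough numbers on this, here is a back-of-envelope sketch (all figures illustrative, not from any particular camera): the smear, measured in sensor pixels, is just the subject's apparent speed across the frame multiplied by the exposure time.

```python
def blur_pixels(subject_speed_px_per_s: float, exposure_s: float) -> float:
    """Pixels of smear: apparent speed across the sensor times exposure time."""
    return subject_speed_px_per_s * exposure_s

# A ball crossing a 1920-pixel-wide frame in one second:
speed = 1920.0  # pixels per second (illustrative)

print(blur_pixels(speed, 1 / 30))   # slow shutter (1/30 s): 64 px of smear
print(blur_pixels(speed, 1 / 500))  # fast shutter (1/500 s): ~3.8 px, much crisper
```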
Finally the light is captured by the sensor. Some key factors:
- how quickly the sensor is read out (reading less often uses less power). Usually configurable.
- how sensitive it is to light. Usually configurable by controlling how much amplification is applied.
- what dynamic range it has: human eyes can see an incredible range (difference) in brightness within the same scene. Most camera sensors can detect a much smaller range, “clipping” the darkest and/or brightest areas to the same luminosity (brightness) value. Subtle areas of shading will be lost, turned into a solid patch of a single colour/intensity. Some cameras improve on this with “HDR” (high dynamic range) imagery.
- how much noise (random aberrations) it introduces.
- how the colors are sensed - eg stacked elements (RGB sensing elements on top of each other - less blur) or side-by-side (easier to make); with or without a Bayer filter.
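The clipping point above can be shown with a toy sketch (the 0–1 luminance scale and the sensor's usable range are made up for illustration; this is not a real sensor model):

```python
def sense(luminance: float, lo: float = 0.05, hi: float = 0.95) -> float:
    """Clip scene luminance (0..1) to the sensor's usable range."""
    return max(lo, min(hi, luminance))

# Deep shadow detail ... midtone ... bright highlight detail:
scene = [0.01, 0.03, 0.5, 0.97, 0.99]
print([sense(v) for v in scene])
# The two darkest values collapse to one level, and so do the two brightest:
# [0.05, 0.05, 0.5, 0.95, 0.95] - the shading distinctions are gone.
```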
To take a good image of a given scene with traditional cameras requires carefully juggling the aperture, shutter speed, and sensor sensitivity. The combination of all three is the exposure setting. This balancing act is often depicted as the exposure triangle. This TechRadar article is a good introductory explanation.
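One way to see the triangle as arithmetic: photographers combine the three settings into a single exposure value (EV), using the standard formula EV = log2(N²/t) adjusted for ISO. The settings below are illustrative:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV = log2(N^2 / t), adjusted for ISO. One EV step = one 'stop' of light."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100.0)

# Opening the aperture from f/4 to f/2.8 (one stop) admits twice the light,
# so EV drops by roughly 1:
print(round(exposure_value(4.0, 1 / 60) - exposure_value(2.8, 1 / 60), 1))  # 1.0
```

The juggling act is keeping EV where you want it while trading the settings off: halve the shutter time and you must open the aperture a stop, or raise the sensitivity a stop, to compensate.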
Important: shutter speed is NOT the frame rate of video!
If we’re streaming at 30fps we want a shutter speed that is twice the frame rate or faster (half the frame time or shorter) so that we get crisp, clear footage! An example: 30fps -> shutter speed of 1/60th of a second or faster.
Note: in film/video the shutter speed is often called shutter angle. In traditional film cameras, the shutter is a spinning disk that spins once each time a frame of film is advanced, ready to be exposed. The normal disk is called a 180˚ shutter- as half of the disc is open. So the shutter is open for half the time of a frame. For more details read this or this.
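The conversion between shutter angle and shutter speed is simple arithmetic - the fraction of the disc that is open, divided by the frame rate:

```python
def shutter_time(angle_deg: float, fps: float) -> float:
    """Seconds the shutter is open: open fraction of the disc / frame rate."""
    return (angle_deg / 360.0) / fps

# The classic 180-degree shutter at 24 fps exposes each frame for 1/48 s:
print(1 / shutter_time(180, 24))  # 48.0
# At 30 fps, a 180-degree shutter gives the 1/60 s from the example above:
print(1 / shutter_time(180, 30))  # 60.0
```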
- we may not have “studio” lighting for all scenes -> advantageous to be able to adjust lens (size), aperture, and shutter speed
- flexible focal distances and depths of field - to be able to focus on a project/demo close up, or a room of people
- Good autofocus (fast, minimal wander, good face detection) is required
- Several mounting options are needed - full-size and desktop tripods, preferably with tilt/pan handles. A flexible mount (eg Joby GorillaPod) for lighter cameras
Step 2: Processing
In-camera image processing can be roughly grouped into corrections and effects.
Examples of corrections include color grading/correction, demosaicing, etc. Effects might include chroma-key (“green screen”), graphical borders, etc.
- we have no use for effects: we will do them in the streaming software, where they are easy to turn on/off and adjust, there are more options, etc
- corrections that improve the image quality could be useful
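As a toy illustration of why demosaicing is needed: with a Bayer filter, each photosite senses only one colour, so the other two channels must be reconstructed from neighbours. The sketch below assumes the common RGGB layout; real cameras use far more sophisticated interpolation.

```python
def bayer_channel(row: int, col: int) -> str:
    """Which colour an RGGB Bayer photosite at (row, col) senses."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# The repeating 2x2 tile of the mosaic - demosaicing fills in the two
# missing colour channels at every photosite:
print([[bayer_channel(r, c) for c in range(2)] for r in range(2)])
# [['R', 'G'], ['G', 'B']]
```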
Step 3: Live Output
For most non-professional cameras, live output exists for monitoring - verifying you’re recording what you want. Usually it’s the same image as what is shown in the viewfinder or LCD screen, including all the overlay information about focus, exposure, etc. Most newer cameras have an HDMI output for this. Some also have USB, but this is usually only for transferring recorded photos and videos. However, there are a few cameras that will output partial or full quality video via USB.
On a small but increasing number of cameras, all the overlay information can be turned off, giving a full quality clean output. This is called clean HDMI.
- be careful: some cameras can produce clean HDMI but not at full resolution and quality, or only by disabling features (like autofocus). Canon does/did this a lot to differentiate their consumer and prosumer/professional DSLR cameras.
- live preview can be used to work around the time (and space) recording limits
Professional cameras may have SDI output. SDI carries uncompressed, unencrypted video, audio, and timecode; but licensing costs make it rare on consumer/prosumer cameras.
- we need clean, full resolution, full framerate, HDMI output
- being able to use USB instead of HDMI would be a big plus
- 4:4:4 or 4:2:2 chroma sub-sampling, or 10-bit (HDR) output are nice to have (eg for green screens)
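For reference, the J:a:b notation counts chroma samples over a J-pixel-wide, two-row region: a in the first row, b additional samples in the second. A quick sketch of why heavier subsampling hurts green-screen keying (fewer colour samples means coarser key edges):

```python
def chroma_samples_per_4x2(a: int, b: int) -> int:
    """Chroma samples (per colour-difference channel) in a 4x2 pixel block."""
    return a + b

# Full colour resolution vs the common subsampled variants:
for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}.items():
    print(name, chroma_samples_per_4x2(a, b), "chroma samples per 4x2 block")
# 4:4:4 keeps all 8; 4:2:2 keeps 4; 4:2:0 keeps only 2.
```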