- Distinguishing Source Files and Their Differences
- Understanding Video File Bit Rates
- Learning About Player Versions and Codec Options
- Determining Your Video Compression Profiles
Determining Your Video Compression Profiles
Now that you have a thorough understanding of your files, bit rates, and codecs, you’re ready to create one or more compression profiles for your Flash-compatible video content. A compression profile specifies, at minimum, the bit rate (or data rate), frame size, frame rate, and keyframe interval for the compressed video output, in addition to the bit rate of the audio track. Some video encoders also let you specify constant bit rate (CBR) or variable bit rate (VBR) encoding. You should plan your compression profile before you encode your video.
Total bit rate
The first number you need to determine for a compression profile is how much bandwidth your video clip requires for delivery. You can then divide the total bit rate between the video and audio tracks.
The bandwidth you require from a viewer isn't necessarily equal to the bit rate you use for a clip. Why is this the case? Simply put, a viewer's Internet connection speed can fluctuate immensely during video playback; the longer the clip, the more likely the viewer will experience a disruption in connection speed. For this reason, many video producers specify a total bit rate between 70 and 80 percent of the target connection speed. For example, if you're targeting cable-modem viewers with a tested 768 Kbps download rate, you'd use a total bit rate of about 614 Kbps (80 percent of 768 Kbps). Some video producers go further and require twice as much available bandwidth as the video clip's bit rate.
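The headroom rule described above can be sketched as a short calculation. The function name and the default 80 percent headroom are illustrative, not part of any encoder API:

```python
def total_bit_rate(tested_kbps, headroom=0.8):
    """Return a total (video + audio) bit rate that leaves 20-30%
    of the viewer's tested connection speed free for fluctuations."""
    return int(tested_kbps * headroom)

print(total_bit_rate(768))       # 614 Kbps for a tested 768 Kbps cable modem
print(total_bit_rate(768, 0.7))  # 537 Kbps with the more conservative 70% rule
```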
Audio bit rate
Of course, most video content isn’t complete without the audio track. After you’ve determined the total bit rate, determine the bit rate necessary to reasonably reproduce the audio content of your original source. Unless you’re targeting dial-up modem connection speeds, you should use at least 24 Kbps with the MP3 or AAC codec for a mono (single-channel) audio track. Higher-fidelity audio tracks, such as music sound tracks, benefit from higher audio bit rates, such as 48 Kbps (AAC) or 96 Kbps (MP3). Whatever audio bit rate you use, subtract the value from your total bit rate value to determine how much bit rate is left for the video bit rate.
Video bit rate
As you learned earlier, the video track has its own specific bit rate, or data rate. The bit rate you choose for a piece of content should support the frame size and frame rate you want to use; if you specify too low a bit rate, the visual quality of your video content suffers. Base the video bit rate on the bandwidth demands you're willing to place on your target audience.
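The split between audio and video described in the last two sections is a simple subtraction. The values below are a hypothetical example (614 Kbps total with a 48 Kbps AAC audio track), not figures from any particular encoder:

```python
def video_bit_rate(total_kbps, audio_kbps):
    """What remains for the video track after the audio track
    is subtracted from the total bit rate."""
    return total_kbps - audio_kbps

print(video_bit_rate(614, 48))  # 566 Kbps left for the video track
```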
Frame size
The width and height of video destined for computer playback should be kept to a square pixel aspect ratio. Any nonsquare pixel aspect video, such as DV or HDV, should be adjusted for computer playback. For example, a DV source file with a 720 by 480 frame size can be resized during the encoding process to 640 by 480, 480 by 360, 320 by 240, 160 by 120, and so on. Similarly, HDV content with a 1440 by 1080 native nonsquare pixel frame size should be resized to 1920 by 1080, 1024 by 576, 768 by 432, 512 by 288, 256 by 144, and so on. You should determine the frame size of your video content in conjunction with the frame rate of the video, as you learned earlier with compression formulas.
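A minimal sketch of choosing square-pixel frame sizes follows: scale the display aspect ratio (4:3 for standard DV, 16:9 for HDV) down from a chosen width, keeping the height even, as most codecs require. The function name is hypothetical:

```python
def square_pixel_size(width, aspect_w, aspect_h):
    """Compute a square-pixel frame size for a given display
    aspect ratio, forcing an even height."""
    height = round(width * aspect_h / aspect_w)
    return width, height - (height % 2)

for w in (640, 480, 320):              # DV source, 4:3 display
    print(square_pixel_size(w, 4, 3))  # (640, 480), (480, 360), (320, 240)

for w in (1024, 768, 512):              # HDV source, 16:9 display
    print(square_pixel_size(w, 16, 9))  # (1024, 576), (768, 432), (512, 288)
```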
Frame rate
The frame rate of the compressed video should be an even divisor of the source frame rate. For example, if the original video frame rate was 30 fps (or 29.97 fps), you should use 30 fps, 15 fps, or 10 fps. The frame rate should reasonably convey the sense of movement in the original content. If your content contains fast-moving subjects, use a higher frame rate. However, as you increase the frame rate, you may need to reduce the frame size of the video to retain reasonable visual quality for a specific bit rate.
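Because the output frame rate should divide evenly into the source rate, the valid choices for a given source can be enumerated. This is an illustrative helper, not an encoder feature:

```python
def frame_rate_options(source_fps):
    """List output frame rates that divide evenly into the
    source frame rate, so frames drop at a regular interval."""
    return [source_fps / n for n in range(1, source_fps + 1)
            if source_fps % n == 0]

print(frame_rate_options(30))  # [30.0, 15.0, 10.0, 6.0, 5.0, 3.0, 2.0, 1.0]
```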
Keyframe interval
Video keyframes are a lot like keyframes in Flash tweens; a keyframe specifies a significant point of action in a range of frames in the video clip. A keyframe, also called an i-frame (short for intraframe), is used as a starter frame, drawing the complete visual content of the video frame. The following frames, also called p-frames (short for predictive frames), store only the changes from the preceding frame. When you compress video to the FLV or H.264 format, your encoder lets you specify the frequency of keyframes in your clip, also known as the keyframe interval. Keyframes are best inserted in multiples (or fractions) of your compressed file's frame rate. For example, if you're using a frame rate of 15 fps, you might use a forced keyframe interval of 150 frames (one keyframe generated every 150 frames, or every 10 seconds). Some encoders allow you to select automatic or natural keyframes, which essentially tell your encoder to create a keyframe whenever enough of the video frame has changed.
The keyframe interval can greatly affect the overall quality of the video clip. If the encoder creates too many keyframes for an allocated video bit rate, the visual quality degrades. If it creates too few keyframes, you may not be able to seek or scrub the video smoothly or accurately. As a rule for higher-quality Web-deployed video, having too few keyframes is better than having too many. The expression bit budget refers to how the bits of a video bit rate are spent, and keyframes use more bits than other frames in your Web video. Figure 3.5 visualizes a bit budget for a theoretical bit rate, representing high-quality keyframes with $10 bills and low-quality keyframes with $5 bills. If the 17 frames of the video use a keyframe interval of 8, only 3 keyframes are created in the sequence. If the keyframe interval is decreased to 4, 5 keyframes are created. The additional keyframes consume more of the bit budget, and, as a result, the quality of each keyframe is reduced; fewer bits are available for each one. Remember, the video bit rate is held constant in this hypothetical example.
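The keyframe counts in the Figure 3.5 example can be verified with a one-line calculation: a keyframe lands on frame 0 and on every interval-th frame after it. The function name is illustrative:

```python
def keyframe_count(total_frames, interval):
    """Number of keyframes in a clip, given a forced keyframe
    interval (keyframe on frame 0, then every `interval` frames)."""
    return (total_frames + interval - 1) // interval

print(keyframe_count(17, 8))  # 3 keyframes (frames 0, 8, 16)
print(keyframe_count(17, 4))  # 5 keyframes (frames 0, 4, 8, 12, 16)
```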
Figure 3.5 A representation of bit budget and keyframe intervals.
In the real world, you can quickly see the result of creating too many keyframes. Figure 3.6 shows a frame pulled from a 30 fps 1265 Kbps video clip that uses a keyframe interval of 1, meaning every frame is a keyframe. With so many keyframe “mouths” to feed, the video quality suffers horribly. Figure 3.7 shows a frame pulled from the same source video clip with an identical video bit rate using a keyframe interval of 60, or one keyframe every two seconds. The image quality improves immensely. Finally, Figure 3.8 shows the source video clip compressed with a keyframe interval of 300. The finer details of the subject and background become sharper.
Figure 3.6 A video frame from a compressed clip using a keyframe interval of 1.
Figure 3.7 A video frame from a compressed clip using a keyframe interval of 60.
Figure 3.8 A video frame from a compressed clip using a keyframe interval of 300.
One important consideration for keyframe intervals used with Flash-based video is the ability to seek to more points of the video clip. With ActionScript, you can seek only to keyframes within the video clip—if you try to seek to a time in the clip that doesn’t contain a keyframe, Flash Player jumps to the closest keyframe. You may notice this phenomenon when you scrub a video clip; the frequency of updates while scrubbing indicates the number of keyframes. However, if you use enhanced seek with video content served by Flash Media Server, the server can generate keyframes on the fly—enabling a viewer to scrub the video more smoothly.
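The snapping behavior described above can be modeled in a few lines. This is a sketch of the concept, not the Flash Player or ActionScript API; the function and the keyframe times are hypothetical:

```python
def snap_seek(requested_time, keyframe_times):
    """Return the keyframe time closest to the requested seek time,
    mimicking how a player jumps to the nearest keyframe."""
    return min(keyframe_times, key=lambda t: abs(t - requested_time))

keyframes = [0.0, 2.0, 4.0, 6.0, 8.0]  # one keyframe every 2 seconds
print(snap_seek(4.9, keyframes))  # 4.0 -- the closest keyframe
print(snap_seek(5.1, keyframes))  # 6.0
```

The fewer keyframes a clip has, the farther apart these snap points sit, which is why scrubbing feels coarser with long keyframe intervals.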
H.264 content using the Main or High profile can utilize another special video frame type called a b-frame, or bi-predictive frame. B-frames are one of the primary reasons that the Main and High profiles can produce substantially better image quality than the Baseline profile. B-frames can refer to past or future frames. How can this be so? How can a frame borrow details from a frame that has not yet played? The encoder can arrange frames out of sequence as they're written to the compressed video file. On playback, the decoder knows to display the frames in the correct order, a process known as frame reordering. For example, if you had a sequence of frames such as this:
0 1 2 3 4 5 6
and the encoder wanted to make frame 3 a b-frame that referred to data in frames 2 and 4, the frames would be stored in the following sequence:
0 1 2 4 3 5 6
Upon playback, the decoder reorders the frames into the original sequence. B-frames, though, are more processor-intensive to decode as a result of the frame reordering operation. The b-frame count refers to the number of b-frames per keyframe interval, or GOP. The general rule of thumb is to use a b-frame count of three or lower.
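The reordering example above can be modeled as a pair of swaps: the encoder stores the b-frame after the future frame it references, and the decoder swaps them back. This is a toy model of the concept, not real codec behavior:

```python
def reorder(frames, b_frame_index):
    """Swap a b-frame with the future reference frame that follows
    it. Applying the swap twice restores the original order, which
    is what the decoder does on playback."""
    out = list(frames)
    out[b_frame_index], out[b_frame_index + 1] = \
        out[b_frame_index + 1], out[b_frame_index]
    return out

display = [0, 1, 2, 3, 4, 5, 6]
stored = reorder(display, 3)            # frame 3 becomes a b-frame
print(stored)                           # [0, 1, 2, 4, 3, 5, 6]
print(reorder(stored, 3) == display)    # True: decoder restores order
```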
CBR and VBR encoding
Most video encoders support two types of video data rates: constant bit rate (CBR) and variable bit rate (VBR). If you intend to stream video files from a Flash Media Server or streaming video service provider, you should use CBR encoding. CBR encoding ensures a stable, predictable bit rate, which means you can avoid potential pitfalls with rebuffering. If you intend to serve video content from a Web server (over HTTP) or a local source such as a CD or DVD, use VBR encoding. VBR encoding enables the encoder to spike the data rate for more complex sections of the video clip and reduce the data rate for visually simpler sections. Figure 3.9 demonstrates the potential effect of VBR versus CBR encoding with a video clip encoded at an average video bit rate of 500 Kbps. The VBR version is allowed to exceed the average bit rate at a difficult section, while the CBR version stays within tight limits of the average bit rate.
Figure 3.9 The bit rate of a VBR clip plotted against a CBR clip.
Some video encoders let you choose one- or two-pass encoding procedures with CBR or VBR encoding. You'll usually see better visual results with any two-pass setting, though two-pass encoding takes roughly twice as long as one-pass encoding. In a two-pass procedure, the encoder first analyzes the video content to see where bit rate peaks can be optimally applied, and then applies the information gathered during the analysis pass to the video output. A bit rate peak is a point or span of time in the content where there is more data than the average bit rate value allows. With VBR encoding, a section of video that requires less bit rate to encode can lend bits to a more difficult section. Consider a video clip with one talking-head segment and another segment of fast-paced action scenes. The talking-head footage may not need as much bit rate as the action content, so the bit rate savings there can be applied to the action content, allowing it to exceed the average bit rate of the entire video clip.
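The reallocation idea behind two-pass VBR can be sketched as follows: pass 1 assigns each segment a complexity score, and pass 2 divides the average bit rate in proportion to those scores. The function and the complexity values are made up for illustration; real encoders use far more sophisticated rate control:

```python
def allocate_vbr(avg_kbps, complexity):
    """Split an average bit rate across segments in proportion to
    per-segment complexity scores from a hypothetical first pass."""
    total = sum(complexity)
    return [avg_kbps * len(complexity) * c / total for c in complexity]

# Talking-head segment (low complexity) vs. action segment (high).
rates = allocate_vbr(500, [0.5, 1.5])
print(rates)  # [250.0, 750.0] -- the action segment borrows bits
```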