Video Essentials

Return of the Video Doctor: Simple Fixes for Online Video Errors


This is the Video Doctor, back at you again with relatively easy fixes for common mistakes. In the first article, I identified problems with lighting, clothing, and framing by mostly amateur producers. This time, I’ll focus on mistakes made by a more professional production group.

Don’t Forget to Deinterlace

The Dan Patrick Show is broadcast live on three networks, and the show uploads multiple streams daily to YouTube. While I was watching a recent interview with Adam Sandler, I noticed that the video exhibited the interlacing artifacts evident here.

The slicing artifacts in the hands show a failure to deinterlace.

Specifically, interlacing artifacts appear in multiple ways. In the figure, you can see the faint slicing pattern in Sandler’s fingers, as well as the jaggies around the title text and Sandler’s collar. These look even worse when you zoom the video to full screen, as I frequently do on my 31-inch HP monitor. In other frames with more motion, you can see both fields that comprise a single frame. The cause of all these problems is the same: the video was shot in interlaced mode and wasn’t deinterlaced when rendered for upload to YouTube.

When shooting for online distribution, it’s best to shoot in progressive mode. Sometimes, however, you may not be able to, particularly if you’re shooting for broadcast as well as streaming. In these instances, always remember to deinterlace before encoding.

Interestingly, some programs, such as Adobe Media Encoder and Apple Compressor, automatically deinterlace when producing formats like H.264 encoded in an MP4 wrapper. Many other tools, like Sorenson Squeeze and Telestream Episode, have simple switches you need to engage to deinterlace. It’s simple, it’s fast, and you just have to remember to do it before producing your mezzanine files for uploading to YouTube, or otherwise encoding for web delivery.
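If your tool chain includes a command-line encoder such as ffmpeg (my own example, not a tool mentioned above), the deinterlace-then-encode step can be scripted so it’s never forgotten. This sketch builds an ffmpeg command using the real yadif deinterlacing filter; the file names are hypothetical.

```python
# Sketch: build an ffmpeg command that deinterlaces with the yadif
# filter before encoding H.264 into an MP4 wrapper.

def deinterlace_cmd(src, dst):
    """Return an ffmpeg argument list that deinterlaces src and
    writes a progressive H.264/MP4 file to dst."""
    return [
        "ffmpeg", "-i", src,
        "-vf", "yadif",       # "yet another deinterlacing filter"
        "-c:v", "libx264",    # encode video as H.264
        "-c:a", "copy",       # pass the audio through untouched
        dst,
    ]

print(" ".join(deinterlace_cmd("interview.mov", "interview.mp4")))
```

Wrapping the arguments in a function makes it easy to drop the same deinterlacing step into every render script, so the switch is always “engaged.”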

Videographers Never Are Off the Clock

The problem with video is that it’s a permanent record; if you produce suboptimal video, and then upload the result to YouTube, it’s up there for the world to see forever, particularly if you’re the White House videographer. Two years later, a new administration comes into town, you’re looking for a new job, and that one bad video, entitled Raw Video: The President Takes a Surprise Walk, haunts you like that college frat picture you never should have posted to Facebook.

Great shot, but otherwise big problems in this White House video.

Even the staunchest Democrat would have problems ignoring the issues in this video. Most importantly, it appears to have been shot without any kind of stabilization, and many of the shots are reminiscent of The Blair Witch Project. Great for an indie film, perhaps, but I’m guessing POTUS isn’t going for the same look. You can address these types of problems in two ways: avoid them, or fix them in post.

Avoiding the problem by using some kind of stabilizer system is the best alternative. I’ve used the Glidecam HD-2000 stabilizer on a shoot or two, and found that it vastly improved the stability of the shots I took while walking around, which are precisely the kind of shots involved in the Surprise Walk video. B&H shows how to use the product in this video. Pricing depends on the weight of your camera, but tops out at $649 retail. If your budget is tight, you can Google “build your own stabilizer system” and you’ll find multiple alternatives you can DIY for under $100.

If you don’t stabilize during the shoot, stabilize in post. There are multiple videos on YouTube detailing the process, including this one for Premiere Pro, this one for Final Cut Pro X, this one for Sony Vegas, and this one for Avid. There are also multiple third-party plugins for most of these programs.

From a technique perspective, if you know you’ll be going handheld, consider shooting in 4K or 1080p, and rendering at a lower resolution like 720p. That way, you should be able to achieve a good bit of stabilization with minimal or no artifacting. If you shoot and output at the same resolution, the editor or plugin will have to zoom into the video, and you’ll lose detail.
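To see how much wiggle room oversampled shooting gives a stabilizer, here’s a quick back-of-the-envelope calculation (my own illustration, not taken from any particular editor or plugin):

```python
# If you shoot at a higher resolution than you deliver, the stabilizer
# can slide the output crop window around inside the captured frame
# instead of zooming in and losing detail.

def shift_margin(shoot_w, shoot_h, out_w, out_h):
    """Pixels of horizontal and vertical wiggle room when cropping an
    out_w x out_h delivery window from a shoot_w x shoot_h frame."""
    return (shoot_w - out_w, shoot_h - out_h)

# Shoot 1080p, deliver 720p:
print(shift_margin(1920, 1080, 1280, 720))   # → (640, 360)
```

In other words, a 1080p shoot delivered at 720p leaves 640 pixels of horizontal and 360 pixels of vertical travel before the stabilizer has to upscale anything.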

Audio, Audio, Audio

The other problem with the Surprise Walk video is the audio. First, the president wasn’t wearing a lavalier microphone, so it appears that all audio was captured by the camcorder. Even for informal videos, this is always a mistake, particularly if the subject will be facing away from the camcorder at times, as the President was in this video.

What do you do if the commander in chief refuses to be mic’d? Again, fix it in post. As you can see in the waveform that I grabbed from the downloaded Surprise Walk file, minimal processing was performed on the audio in post; it doesn’t even look like the audio was normalized, a serious breach of best practices. You can find a useful tutorial on the topic entitled Boost Your Audio in Adobe Premiere and Audition.

Inadequate levels in this audio file.

In addition, if the leader of the free world were my boss, before I normalized I would use the rubber band-type controls available in all audio editors to amplify the regions where the President was speaking, and perhaps even try some compression to strengthen his voice. As you can learn in a tutorial you can access here, I typically compress all narration and other spoken words before uploading, because it makes the speaker sound, dare I say, more presidential.

Keep the Background Audio in the Background

I’m a huge San Antonio Spurs fan, and was excited to see a tribute video entitled San Antonio Spurs Tribute — The Beautiful Game, that’s racked up close to 1 million views in just over four days. There’s a lot to like about the video, which mixes Spurs highlights with a moving narration that included luminaries like Magic Johnson, another favorite of mine.

But, while I was watching the video, I kept thinking, “Dude, turn the background audio down.” Then I downloaded the audio file, loaded it into Audition and saw what you can see here. Overall, the levels were peaking at around 0 dB, which I like. However, the background music was peaking at around -12 dB (the bottom red line), leaving only a 12 dB cushion between the narration (the top red line, at 0 dB) and the background music. According to recommendations from the W3C, for maximum legibility, the difference between the background music and the narration should be at least 20 dB.

Only a 12 dB difference between the peaks in the background music file and the peaks of the narration.

The graphic here shows a waveform that follows the W3C’s recommendations. The background music (bottom red line) maxes out at -21 dB, while the speech tops out at 0 dB, providing a 21 dB cushion. You can play the file at the W3C site, and you’ll immediately hear that the audio is much more legible than in the Spurs video.

Here there’s a 21 dB difference between the background music and narration, so the narration is much easier to understand.

A few other points. First, most producers ignore the 20 dB recommendation and choose levels that sound appropriate to them. That’s okay, but you should test the results in the environment most of your viewers are likely to use. A mix that sounds perfectly clear on your studio headphones will be vastly more muddled on earbuds or $15 speakers. Also remember that you’ve heard the narration 60 times and could understand it if the background music were cranked to sonic-boom levels. Your viewers don’t have that advantage. Think clarity first (and second, and third), and then the mood you’re trying to set with the background music.
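The 20 dB guideline can be turned into a quick sanity check. This Python sketch (the peak values are illustrative, not measurements from the actual files) converts linear peak amplitudes to dBFS and tests the separation between narration and background music:

```python
import math

def db(amplitude):
    """Convert a linear peak amplitude (0.0-1.0) to dBFS."""
    return 20 * math.log10(amplitude)

def separation_ok(narration_peak, music_peak, min_db=20.0):
    """True if narration peaks sit at least min_db above music peaks."""
    return db(narration_peak) - db(music_peak) >= min_db

# Spurs-video-like mix: narration at 0 dB (1.0), music around -12 dB
print(separation_ok(1.0, 0.25))    # → False (only ~12 dB of cushion)

# W3C-style mix: music around -21 dB
print(separation_ok(1.0, 0.089))   # → True (~21 dB of cushion)
```

The same two functions could be pointed at the actual peak readings from your audio editor before you render the final mix.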

What’s the Right Target Audio Level?

Finally, let me tackle the appropriate target decibel level for audio uploaded to YouTube or otherwise deployed on the web. I’ll start with a short story. I was consulting with a client in D.C. and the editor in charge of uploading video to the web related that they were having serious issues with audio volume on their web videos. He said that they sounded great in the studio, but remote viewers playing the videos over the Internet complained the audio was too low. He wondered if it was an audio compression issue.

I downloaded one of the compressed files, loaded it into my sound editor, and saw that volumes peaked at -12 dB. I said, “That’s the problem, the volume is too low.” He responded, “I worked in TV for years, and I’ve always set my peaks at -12 dB. It’s perfect and sounded great in the studio.” Interestingly, we were both right.

In the broadcast world, most channels recommend a max volume of -12 dB; everything you watch on the TV is set to these levels. For this reason, audio at -12 dB sounds normal. On the web, virtually all producers target 0 dB, and web viewers are used to this higher volume. My client’s videos, set to -12 dB, had much lower volume than the average video on the web; hence the complaints.

I always normalize my audio to 0 dB before uploading to YouTube or otherwise deploying. As you’ll learn if you watch this video, normalization pushes the maximum peak in the audio file to 0 dB, so it never causes distortion. You can argue the technical merits of targeting -12 dB, but your volume will be lower than most other audio on the web, and viewers will suspect that you’re out of step, not the other way around.
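If you’re curious what peak normalization actually does mathematically, here’s a minimal sketch, assuming plain floating-point samples (the values are made up for illustration): find the loudest sample and apply a single gain so that peak lands at 0 dBFS.

```python
def normalize_peak(samples, target_db=0.0):
    """Scale samples so the loudest absolute peak hits target_db dBFS.
    One uniform gain is applied, so relative dynamics are preserved
    and nothing is pushed past the target (no distortion)."""
    peak = max(abs(s) for s in samples)
    target = 10 ** (target_db / 20)      # 0 dBFS -> 1.0 linear
    gain = target / peak
    return [s * gain for s in samples]

# A clip whose loudest peak sits around -12 dBFS (linear ~0.251)
quiet = [0.1, -0.251, 0.2]
loud = normalize_peak(quiet)
print(round(max(abs(s) for s in loud), 6))   # → 1.0 (0 dBFS)
```

Because the gain is uniform, a -12 dB broadcast-style mix comes up roughly 12 dB across the board, which is exactly the boost my D.C. client’s web videos needed.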




Discussion

Comments are disallowed for this post.

  1. I have a situation where I have to provide a 1080i stream for broadcast but want to have a progressive output for live web streaming. I hate using deinterlace since it looks so much worse than our 1080p output on the stuff that doesn’t have to have the 1080i output. Is there any gear that can somehow do both outputs?

    Posted by John | May 27, 2014, 4:29 pm
  2. John:

    Thanks for your note. Have you tried shooting in progressive and exporting interlaced for broadcast?

    What program are you using for deinterlacing? Most that I’ve used do a pretty good job and artifacts are few and far between.

    LMK Jan Ozer

    Posted by Jan Ozer | May 27, 2014, 5:44 pm
  3. Thanks for reply, This is actually for live, the interlacing is happening in a Digital Rapids hardware encoder. The issue isn’t as much interlacing arifacting as just the softness of the final image.

    Posted by John | May 27, 2014, 7:38 pm
  4. contact me off list at jozer@mindspring.com. I’d love to see what you’re talking about and see if we can resolve.

    Jan

    Posted by Jan Ozer | May 28, 2014, 8:09 am
  5. Great tips and article Jan!

    Posted by Stjepan Alaupovic | June 6, 2014, 1:52 pm