Brightness, Contrast, Color, Tint, and Sharpness: These five controls establish your "Basic Video Levels" and are fundamental to achieving the best Picture Quality in your Home Theater. EVERY modern TV will feature these controls in its user-accessible settings -- although perhaps less prominently than in the past, as a whole bunch of other weird (and usually unexplained) settings like "Flesh Tone Correction" or "MPEG Noise Reduction" will also be clamoring for your attention. "Contrast" might be called "Picture" in your particular TV, and "Color" might be called "Saturation" (hey, Marketing guys LOVE to invent names!), but they are the same controls.
And as I move along in this Blog, I'll undoubtedly talk about each of them -- umm, at some point! We've already covered Brightness in my post on Blacker Than Black Video, and Contrast in my post on Peak Whites Video. And in my post on Extinguishing Torch Mode Settings I alerted you to the sad fact that the Factory Default settings for these basic controls in your brand new TV are, almost certainly, flat-out WRONG for best quality viewing! So correcting THESE settings is something you'll want to tackle right up front when dialing in your Home Theater.
But when you begin that task, you may be stymied by the discovery that (apparently) the SAME controls are also offered in some or all of your Source devices -- and possibly even in your Audio Video Receiver (AVR)! So, umm, WHICH set of controls should you use? Or should you COMBINE the controls -- doing, say, part of the necessary adjustment in each device?
In my post on Peak Whites Video, I revealed the basic Rule of Thumb for this:
Adjust your Picture using ONLY the "Video Level" Controls Found in your TV!
Let's explore this further.
In that post I gave a specific example: trying to verify that "Peak Whites" pixel values are actually making it through to your screen by temporarily lowering the Contrast control a bunch to see whether they appear. As I pointed out there, you HAVE TO use the Contrast control in your TV to do this check. If, instead, you try to use a Contrast control in your Source device, all you've done is tell the Source to darken those pixels -- which means what gets sent out on the HDMI cable are no longer Peak Whites values! They may, indeed, now show on your screen, but that proves nothing as to whether true Peak Whites values are, or are not, still being clipped.
However, that's not a "real" setting. It's just a temporary setting -- a trick used to do that particular check. So why not use the Contrast control in your Source device (if it offers one) to help adjust your "real" picture settings?
It all has to do with "precision" and "dynamic range" in Video processing. First up is "precision":
In my post on Digital Video, I showed how each pixel in the video stream moving along your HDMI cables is represented by three "component" values, together defining the brightness and color of that pixel. Depending on the video format in use, each component might be an 8-bit value (thus 24 bits total per pixel), or a 10-bit value (thus 30 bits per pixel), or a 12-bit value (thus 36 bits per pixel). However, all the Standard Dynamic Range (SDR) CONTENT you are playing is authored using 8 bits per component! So, umm, where do the extra bits come from if you choose 10-bit or 12-bit output?
Well, they COULD be rounding bits carried out of the Video processing in your Source device. OR, they could be just padding zeroes! Seriously! Padding with zero bits is actually pretty common.
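If you're curious what "padding with zeroes" actually looks like at the bit level, here's a tiny Python sketch (purely illustrative -- no real player's firmware looks like this, and the function names are just mine):

```python
# Purely illustrative: two ways a Source might "promote" an 8-bit
# component value (0-255) to the 10 bits of a 10-bit HDMI output.

def pad_to_10_bits(value_8bit):
    """Append two zero bits -- same picture information, just shifted."""
    return value_8bit << 2            # e.g. 180 -> 720

def carry_to_10_bits(value_8bit, extra_bits):
    """Keep two fractional bits carried out of earlier video processing."""
    return (value_8bit << 2) | (extra_bits & 0b11)

print(pad_to_10_bits(180))            # 720 -- the extra bits hold nothing new
print(carry_to_10_bits(180, 0b10))    # 722 -- the extra bits hold real data
```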
Meanwhile, when your TV receives that Digital Video on its HDMI Input, it MIGHT preserve those additional bits -- passing them along to its internal Video processing. Or it MIGHT simply strip them off under the assumption that only the original 8-bit data is worth preserving!
TECHNICAL NOTE: The same sort of considerations apply to HDR content. All content authored in HDR-10 format, for example (the most common type of HDR content at present), uses 10 bits per component.
But what of the video processing INSIDE the TV? Even lower-end HDTVs typically offer a video processing path which uses 10 bits per component. Better quality TVs -- including UHD/HDR TVs -- likely have a 14-bit internal video processing path. Now the actual display elements in the TV -- the pixels which light up to produce the visible output of the TV -- will be more restrictive in their precision. The pixels may only have a 10-bit dynamic range, for example. But the idea is to carry as much precision as possible throughout the internal video processing steps -- only trimming things down to what the pixels can handle at the very last step, when the pixels are made to light up! This minimizes the chance that rounding errors in one stage of the processing will propagate through the remaining stages. Or more simply: process using more bits than you need in the result!
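Here's a hedged little Python experiment (a toy model, NOT any real TV's pipeline) showing why "more bits than you need" matters: run every 8-bit gray level through two adjustment stages, once rounding back to 8 bits after each stage, and once carrying full precision until the very end.

```python
# Toy model only: two processing stages (a gain down, then a gain up)
# applied to every 8-bit gray level.

GAIN_1, GAIN_2 = 0.8, 1.1

def round_every_stage(v):
    """Low-precision path: snap back to whole 8-bit codes after each stage."""
    return min(255, round(round(v * GAIN_1) * GAIN_2))

def round_at_the_end(v):
    """High-precision path: carry the intermediate result, round once."""
    return min(255, round(v * GAIN_1 * GAIN_2))

mismatches = sum(1 for v in range(256)
                 if round_every_stage(v) != round_at_the_end(v))
print(f"{mismatches} of 256 gray levels land on the wrong value "
      f"when rounding happens after every stage")
```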
But if you do video processing in your Source device, the precision of the result getting sent out on the HDMI cable may be no more than 8-bit (even if you've selected 10-bit or 12-bit output!) -- and certainly no more than 12-bit. And even if there's real data in the extra bits of a 10-bit or 12-bit HDMI stream, the TV may be ignoring them!
Now when you feed that into a 10-, 12-, or 14-bit video processing engine inside the TV, the result from the video processing in the Source will be well preserved through to the pixels on the screen, but the ROUNDING ERRORS in that Source-based video processing are baked into that result! The extra precision in the TV's video processing engine can NOT correct those. The video has already been damaged -- information has been permanently lost.
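To put a number on "permanently lost", here's another toy sketch (again, purely illustrative): the Source applies a modest 90% Contrast reduction and rounds to 8 bits, because that's all its output format carries, and then the TV -- with all the precision in the world -- tries to undo it:

```python
# Toy model only: the Source lowers Contrast to 90% and rounds to 8 bits;
# the TV then applies a perfect-precision 1/0.9 gain to try to undo it.

source_out = [round(v * 0.9) for v in range(256)]            # what hits the cable
tv_restored = [min(255, round(x / 0.9)) for x in source_out] # TV's best effort

surviving = len(set(source_out))
recovered = sum(1 for v in range(256) if tv_restored[v] == v)

print(f"{surviving} distinct levels survive on the HDMI cable (out of 256)")
print(f"only {recovered} of the 256 original levels come back exactly")
```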
This suggests you want your Source device(s) and AVR to do NO video processing at all! Or as little as possible. Let the TV do ALL the work!
From a practical point of view, you can't really achieve "no processing". For example, a disc player HAS TO decode the compressed video format coming off the disc. And it also has to convert that video into a format that's legal to pass along the HDMI cable, which likely involves "color up-sampling", for example.
It may also be asked to do up-scaling or down-scaling of the Resolution of the video. As it turns out, that math works just about as well when done in the Source, because scaling primarily changes the NUMBER of pixels rather than remapping the range of values any given pixel can take.
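Here's a deliberately crude sketch of that distinction (real scalers interpolate and are far more sophisticated, but the basic point survives): a nearest-neighbor upscale changes how MANY pixels you have, not the set of values those pixels hold.

```python
# Deliberately crude: a 2x nearest-neighbor upscale of a one-row "image".
# The output has twice as many pixels, but every value in it already
# existed in the input -- no remapping of the value range is involved.

def upscale_2x(row):
    return [p for p in row for _ in (0, 1)]   # duplicate each pixel

row = [16, 64, 128, 200, 235]
print(upscale_2x(row))
# [16, 16, 64, 64, 128, 128, 200, 200, 235, 235]
```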
But video processing related to the Basic Video Levels is an entirely different kettle of fish! The Source (or AVR) can NOT do as good a job as the TV. The problem is the 2nd item I mentioned above: "Dynamic range".
Suppose you decide your picture is lacking in Red. To increase Red in your imagery you might reach for the Tint control in your Source (adjusting the bias of Red vs. Green for a given level of Blue). The problem is, the data format on the HDMI cable already defines a particular value as meaning "As Red as You Can Get (for a given Brightness)!" There is no value the Source can put out on the HDMI cable which is Redder than that.
So instead, the Source has to REDUCE Green and Blue! The result is, indeed, a picture with more Red in it, but you've now compressed the range (the number of steps) between Black and maximal Green and between Black and maximal Blue.
Range compression leads to rounding errors -- which show as "banding".
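Here's what that looks like as a quick, purely illustrative Python sketch: "adding Red" by scaling Green and Blue down to 80% of their original values.

```python
# Illustrative only: "adding Red" at the Source by scaling Green and Blue
# down to 80%. Red can't exceed 255 on the cable, so the other channels
# shrink instead -- and their number of usable steps shrinks with them.

green_steps_before = len(set(range(256)))                         # 256
green_steps_after = len(set(round(g * 0.8) for g in range(256)))  # noticeably fewer

print(green_steps_before, "->", green_steps_after)
# Fewer distinct steps between Black and maximal Green means smooth
# gradients turn into visible bands.
```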
The same thing happens if you tell the Source to reduce Contrast or raise Brightness. You've compressed the number of steps between Black and Reference White.
What if you tell the Source to RAISE Contrast? Again range limits get in the way. Pixel values which are already very bright get pushed up, but run into the limit of how bright a pixel can be in the video format going out on the HDMI cable. That means you end up "crushing" the higher brightness pixel values together. They become indistinguishable up there.
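And the flip side, again as a toy example: a 20% Contrast boost applied in the Source.

```python
# Illustrative only: raising Contrast at the Source with a 1.2x gain.
# Anything that computes to more than 255 has nowhere to go in the
# output format, so formerly distinct bright values all pile up at 255.

boosted = [min(255, round(v * 1.2)) for v in range(256)]
crushed = sum(1 for v in boosted if v == 255)

print(f"{crushed} different input levels all come out as 255 -- crushed whites")
```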
Don't all these same problems arise if you use the Basic Video Levels controls in the TV? No -- or at least, not in a well-designed TV. The extra bits of precision in the internal video processing of the TV also allow more "dynamic range" to be preserved through the video processing. Now of course, any TV will still have limits as to how far it can be adjusted without problems. But for these Basic Video Levels settings, the key to minimizing problems is to do the adjustments as close as possible to the pixels you are trying to light up -- i.e., in the TV itself.
So the goal when setting up a Source device (or any AVR which offers video processing) is to find the settings which leave the video entirely unmolested as regards the Basic Video Levels.
Fortunately, the situation with Sources and AVRs is much, MUCH different than with TVs. For Sources and AVRs, the Factory Default settings really are likely to be the settings which minimize such processing of the video.
Some Sources will offer "Picture Modes" -- trying to emulate the sort of folderol found in TVs, apparently -- and for those you will need to figure out which Picture Mode minimizes video processing. It is likely to be the Factory Default choice, but it certainly wouldn't hurt to do some Internet research to see what other owners (and professional video calibrators) have discovered for that particular model of Source device.
Presuming you HAVE found the setup in your Source(s) and AVR which minimizes processing of the Basic Video Levels, is there EVER any reason to touch those controls? Sure. Your particular TV may not be able to achieve an ideal picture using only its internal controls. This is more likely to be the case with cheaper TV models. It also may be the case that your TV is developing a fault which limits its picture quality. Assuming you can't simply replace the TV, you MAY discover some small adjustments to the Basic Video Levels in your Source(s) or AVR provide a workaround.
If you find you have to make BIG changes to the Basic Video Levels in your Source(s) or AVR to get a satisfactory picture, that almost invariably means you've made a fundamental setup mistake in your TV.
For example, if you've fallen into the Extended/Enhanced Video Trap (see my post on Blacker Than Black Video for details), your first indication will likely be that you have to make excessive changes to the Brightness setting to get Black and near-Blacks to look right. There's a similar error going on in the Whites in this case, but that's not as obvious.
The amount of change may be more than you can achieve even with the highest or lowest settings of Brightness in your TV, and so you may be tempted to get the rest of the way there by adding Brightness adjustment in your Source(s) or AVR. But in reality, what's going on is that you've got the wrong video format in use -- a mismatch between what the Source is sending and what the TV is expecting.
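For the curious, here's the arithmetic behind that trap (a hedged sketch -- the exact conversion details vary by device, but these are the standard SDR code points): limited "Video" range puts Reference Black at code 16 and Reference White at code 235, while full "PC"/"Enhanced" range uses 0 and 255.

```python
# Illustrative only: the scale of a limited-range vs. full-range mismatch.
# Limited ("Video") range: Black = code 16, Reference White = code 235.
# Full ("PC"/"Enhanced") range: Black = code 0, Reference White = code 255.

BLACK_IN, WHITE_IN = 16, 235

# A TV expecting FULL range but fed LIMITED range shows:
print(f"Black displayed {BLACK_IN / 255:.1%} of the way up the code range")
print(f"Reference White displayed at only {WHITE_IN / 255:.1%} of full code value")

# A Brightness offset large enough to drag code 16 down to Black does
# nothing for the Whites end -- the whole range needs to be re-stretched,
# which means fixing the format mismatch itself, not piling on offsets.
```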
Or to put it another way, a "correct" setup will usually yield an ideal picture with "reasonable" (not extreme) settings of the Basic Video Levels in your TV, and NO changes to the Basic Video Levels in your Source(s) or AVR away from their default values which minimize processing.
In summary, when calibrating the Basic Video Levels for your TV, use calibration content of known "correctness" (i.e., from a Calibration Disc), set your Source -- the disc player in this example -- to do NO alteration to that calibration content, and make all needed adjustments, exclusively, via the Basic Video Level controls found in your TV.
--Bob