Gamma, Linear Color Space and HDR


Part I: Gamma

We’re all computer users whose primary interface for viewing images is the monitor. Everything about monitors, including their aspect ratio, comes from their ancestors: old television sets.

Old television sets were bad - it took a lot of voltage to make them bright. That’s the thing to remember - that images on a monitor are “darker than they should be”, which we’ll need to know later in order to debug what’s going wrong.

Monitors have a gamma of about 2.5, which looks like the graph shown here (figure: Monitor Gamma of about 2.5). As you can see, with a voltage of 50%, you’re nowhere close to getting 50% brightness.
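A quick way to see this is to model the monitor’s response as a simple power function (a sketch, not an exact model of any real monitor):

```python
# Simplified monitor model: displayed brightness = voltage ** gamma.
# With a gamma of 2.5, a 50% input voltage gives only about 18% brightness.
def monitor_brightness(voltage, gamma=2.5):
    return voltage ** gamma

print(monitor_brightness(0.5))  # about 0.177 -- far from 0.5
```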

But when we watch videos, look at pictures, and work on our computer, the images look the way we think they should, don’t they?

For display on television sets, video cameras compensate by producing images that are slightly “brighter”, so the result is a gamma of about 2.2.

On Windows, as well as on the web (any operating system) via the sRGB standard (http://www.w3.org/Graphics/Color/sRGB), images are handled to give you the experience of having a monitor with a gamma of 2.2. In other words, the monitor profile applies a very small gamma correction of about 1.1 to brighten things up a bit. Our eyes do not want a linear response, and it is pleasing to have an extra gamma applied to images. Linear images appear washed out, and we have difficulty seeing the important details.

Since the images you usually work with (photographs taken with a digital camera, for example) look the way you expect them to, they have obviously been encoded with a gamma correction that compensates for the darkening of the monitor’s response.

What kind of encoding is used? Well, the digital camera captures data from its CCD in some device-specific format. It then encodes the pixel values on the assumption that they will be sent to a device that responds to them with a gamma curve of about 2.5, producing a JPEG image that is meant to be viewed on a computer monitor. In other words, the JPEG images we work with are, like most other digital images, designed and encoded to be seen on a monitor. They are not linear.
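As an illustration (a sketch using an idealized 2.2 everywhere, rather than a camera’s actual device-specific curve), the camera’s gamma encoding and the monitor’s gamma response roughly cancel each other out:

```python
# Idealized round trip: the camera's gamma encoding compensates for the
# monitor's gamma response, so the viewer sees roughly the original
# scene values.
def encode(linear, gamma=2.2):
    return linear ** (1.0 / gamma)     # what the camera stores

def display(encoded, monitor_gamma=2.2):
    return encoded ** monitor_gamma    # what the monitor shows

x = 0.25
print(display(encode(x)))  # back to 0.25
```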

However, when you render an image using a CG application, or paint something in PhotoShop (without any color management setup), no such color encoding happens. Shaders, compositing algorithms, paint brushes, and so on simply poke numerical values without considering the perceptual effect they will have when the image is displayed on a monitor. It's all pure and perfect math, and it doesn’t account for the "impure" way that digital cameras reproduce images.

These calculations all happen in linear space, which means that if a value needs to be twice as bright, the CG application multiplies the value by two, regardless of the fact that the result will in fact not be twice as bright when displayed on a monitor.
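To make that concrete, here is a small sketch of what “multiply by two” does once the monitor’s response gets involved (the gamma value is an assumption):

```python
# Doubling a pixel value in linear space doubles the light energy, but
# after the monitor's gamma response the displayed brightness ratio is
# 2 ** gamma, not 2.
gamma = 2.2
v = 0.25
on_screen = v ** gamma
on_screen_doubled = (2 * v) ** gamma
print(on_screen_doubled / on_screen)  # 2 ** 2.2, about 4.59
```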

While 3D rendering done with Gouraud shading is, by definition, completely linear, things get muddier when you apply image textures to your 3D objects.

Part II : Gamma, Sometimes

In Part I we looked at the (now trendy) notion that the pictures we work with are encoded with a gamma curve to compensate for how the monitor works. However, 3D renderers and compositing software calculate things in linear space; to put it simply, they just do the math and produce exact results in the simplest and most obvious way. They do not try to correct for the technological problem of monitor construction.

Some people will tell you that you should convert all of your images to linear before doing anything, and use a lookup table to see the result. That statement should come with a big warning about Dealing With Absolutes (only the Sith do!).

Not all software ignores gamma. On an editing station such as an Avid Media Composer or an Avid DS, the color correction process is in fact aware that video has a gamma of 2.5 and uses that fact to its advantage. The Avid station processes the data in YUV directly, which allows real-time performance. It also allows for more accuracy than you would get if you converted everything to RGB (especially 8-bit) and did the same color correction in another product. The user interface also accounts for how video data works.

In 3D rendering software, the various shaders are designed to make the result look good, not necessarily mathematically correct. The whole solution was designed to give good-looking results in the environment in which it is used.

In 2D compositing, clearly a lot of what we’ve been doing is not mathematically correct. For example, blurring an image with a gamma is not the same as blurring that image without the gamma and then applying a gamma for display. However, what effect are we trying to achieve when we blur an image? Depth of field? A foggy environment? The blur is a hack with no equivalent in nature, and we’re used to how it looks. Most color correction methods make no sense in nature either. With anti-aliasing of text and shapes, the results are clearly best if they are computed in true linear space; however, we accept the current ‘incorrect’ results as the de facto standard.

However let’s say we ignore all of this and decide to take the purist approach and work in linear space.

How to work in Linear Space

Here are the very basic steps of the process:

  1. Identify the gamma found in all source images.
  2. Remove it.
  3. Do the processing (3D rendering, compositing).
  4. Put back the gamma for viewing the result (preferably using a color management system).
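The steps above can be sketched for a single pixel value (a toy sketch; the gamma values are the assumed categories discussed below, not something read from the file):

```python
# Step 2: remove the assumed source gamma to get a linear value.
def to_linear(encoded, source_gamma=2.2):
    return encoded ** source_gamma

# Step 4: put a display gamma back for viewing.
def to_display(linear, display_gamma=2.2):
    return linear ** (1.0 / display_gamma)

texture = 0.5                         # step 1: assume a 2.2-encoded source
linear = to_linear(texture)
result = linear * 2.0                 # step 3: process in linear space
final = min(to_display(result), 1.0)  # clamp for an 8-bit display
```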

Identifying the Gamma in Source Images

In order to do this, we need to know where the gamma is coming from. For purposes of this article, we’ll define the following simplified categories:

  • Images coming from a camera - For these, we’ll assume a gamma of between 1.8 and 2.5
  • Artist-painted color images - These are assumed to have a gamma of 2.2. The image looks exactly as the artist intended, when viewed on a monitor.
  • Property maps - I use this term to describe any image you paint in PhotoShop that is to be used as a gradient for a 3D render. These images can generally be said to have no gamma and to already be linear. Say you paint a bump map or a wet map: when you paint the value 0.5 in PhotoShop, you really mean the value 1/2 and no other value. It’s a map of properties, or numbers you mean to send to a formula or shader. It's not an image.
  • Images produced by a CG application - Images produced by a computer should be assumed to be linear.
  • Specific image formats - Cineon images have a well-defined data format and spec (I’ll discuss this topic in another article). OpenEXR and .hdr images are assumed to be linear. PNG files are web images and are assumed to have a gamma, as specified in the format's spec.

File Format Spec vs. Reality

File format specs do not determine what’s in an image file; humans do. How the file was produced is important. For example, if you use mental images' imf_copy utility to convert a .pic file to an OpenEXR file, you have not created a linear floating-point image. The data was simply copied from one format to another. imf_copy cannot know whether a gamma was encoded in the image, nor remove it. Therefore, you may very well end up with an OpenEXR file that has a gamma of 2.2 encoded in it. As a rule, mental images software never attempts to second-guess the user or guess how the data is supposed to be interpreted. You have to be smarter than the tool.

More discussion about what to do next in Part III.

Part III : Linear Space In Render

Building on what we’ve established up to now, we can now work in linear floating point space in XSI.

Given that we don’t know the exact gamma curve that is encoded in the images that we work with, we will get an approximation of linear color space, which, like many things in CG, will be “good enough”. We’ll use the broad categories of images that I mentioned in Part II to figure out which gamma value is most likely encoded in a given image.

The first step is to remove the gamma from the image used as texture. We need to remember that monitors make images look darker, therefore if we find a gamma correction control, we need to move it in the direction that makes the image darker (in other words, more linear). For best results we need to do this at the highest bit depth possible. If we work with 8-bit images, we’d lose a lot of data from the image if we were to do the gamma correction in 8-bit, so we will do it in the render tree where the texture has been sampled by Mental Ray and is now a color value in floating point. (The drawback of this is that the same gamma calculation will be calculated many times for the same image pixel, which is a waste of processing power.)

In the render tree, we connect a Color Correction node (from the Nodes > Image Processing menu) to the image node. To remove a gamma of 2.2 from a PhotoShop texture, the value that needs to be entered is "1 / 2.2", which can be entered directly as a formula in the Gamma field.
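Assuming the node computes out = in ** (1/gamma) — a guess at its behaviour, not something stated here — entering 1/2.2 in the Gamma field raises each channel to the power 2.2, darkening the texture back toward linear:

```python
# Hypothetical model of the Color Correction node's gamma control:
# out = in ** (1 / gamma). With gamma = 1/2.2 this is in ** 2.2.
gamma_field = 1.0 / 2.2
texture_value = 0.5
linearized = texture_value ** (1.0 / gamma_field)
print(linearized)  # 0.5 ** 2.2, about 0.218 -- darker, as expected
```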

Now if we draw a render region, the texture looks much darker than it used to, which verifies that the proper gamma correction (2.2 vs 1/2.2) has been applied. In order to see the rendered result correctly on screen, we need to add a display gamma to the render region display. On the Active Effects tab of the View Render Options (render region) property editor (choose Render > Region > Options from the Render toolbar), set the Gamma value to 2.2 and turn off Intensity Clipping, which will otherwise prevent modification of the RGB channel.

That’s it: with this setup we convert the textures to linear, let mental ray process in linear, and then display the result with a (very basic) correction for display.

Every time we gamma correct an image we lose data, so we will not set gamma correction in the render pass. Instead, we’ll save the data we have, and only at the compositing stage will we add a final display gamma, when we’re ready to export to video.

The OpenEXR and HDR case

OpenEXR files are, by the spec, linear space images, therefore we would not do any specific correction to use them in mental ray. In XSI, we should always let mental ray use the images from disk directly and disable Image Effects on the image clip property page. The reason for this, besides memory efficiency, is that the purpose of XSI’s image loader is to show the texture in OpenGL, in 8-bit RGBA, not to replace the mental ray image loader.

In the image clip property page, you’ll find the settings for OpenEXR exposure and gamma correction. The gamma correction is used to add a 2.2 gamma curve to the image so that you can see it properly in the OpenGL viewport.

There is also a check box, off by default, to use that correction in rendering. Normally, if you understand OpenEXR you would never do this, but I added this setting for people who get confused, or who do special things like render a game cutscene.

Use In Render causes the image shader to apply the exposure and display gamma. This setting only makes sense if image clip effects are disabled, which should be the case. If effects are enabled, mental ray will get the image with the display gamma and exposure already applied, along with the other image effects, so the gamma will be applied twice. Worse, some image clip effects like Color Correction do not support pixel values that are out of range, so the OpenEXR image will have lost some of its quality.

Exposure and Gamma controls for .exr and .hdr images are also found in the Flipbook and FxTree.

The .hdr format can also be assumed to be linear, so the settings also apply to this format. In both cases, the image loader loads a scanline in floating point, applies the gamma and exposure, and converts the result to 8-bit (or 16-bit unsigned short in the FxTree).
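The loader behaviour described above can be sketched like this (treating exposure as stops is an assumption; the real loader’s exact math may differ):

```python
# Sketch: read a floating-point scanline, apply exposure and display
# gamma, then quantize to 8-bit with clipping.
def display_scanline(scanline, exposure=0.0, gamma=2.2):
    out = []
    for v in scanline:
        v = v * (2.0 ** exposure)           # exposure (assumed: in stops)
        v = max(v, 0.0) ** (1.0 / gamma)    # display gamma
        out.append(min(int(v * 255.0 + 0.5), 255))
    return out

print(display_scanline([0.0, 0.18, 1.0, 4.0]))  # [0, 117, 255, 255]
```

Note how the out-of-range value 4.0 simply clips to 255; the extra dynamic range survives only in the floating-point file, not in the 8-bit preview.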

Differences between imf_disp, exrdisplay, XSI

imf_disp is mental images' image display tool. By default, it applies no gamma correction, so if you view OpenEXR images with it they will appear dark, as the linear data is displayed directly. Gamma correction is available as a command-line option.

exrdisplay, which ships with the OpenEXR distribution, offers some more control over the falloff of the gamma correction. This tool should be seen in the film context for which it is meant: the .exr files are images produced by film scanners. This is raw data, and the controls are used to extract that data and put it in a pleasing display format, as we do with the log-to-linear conversion of Cineon.

Since XSI is not a film image processing tool, these controls are unnecessary; a simple gamma curve is enough for previewing the image in the flipbook or OpenGL viewport, so we only have a gamma slider. (Generally, users with more advanced needs are using a color management system, along with either a commercial or home-built flipbook.) As discussed above, this gamma slider is found on the image clip property page (for OpenGL texture display), in the Flipbook (for viewing the result of a render), and in the File Input node of the FxTree (for quick pre-comp of OpenEXR renders).

If you violate the linear space rendering workflow, you’ll get in trouble, usually at the flipbook stage. For example, let’s say you ignore all this linear workflow and simply change your render output from .pic to .exr. The image you’re rendering may not be a ‘true’ linear image - which happens if the textures in the scene were not all linear - and you probably have gamma correction off in the render region. So when you load this image in the flipbook, it will by default assume that it needs to add a gamma of 2.2 to these linear .exr images. Luckily, you can disable the gamma correction in the flipbook, and it will remember the setting across sessions, so you can ignore this issue. However, if you send your images to another user, they might use an image viewer (such as exrdisplay) that also assumes the images need to be viewed with a gamma correction.

Basically, what’s to be remembered from this tale is that OpenEXR isn’t exactly like the other image formats, because most viewers assume that the images are linear and that a gamma correction curve needs to be added for proper display. For all other formats, like floating-point TIFF, image viewers normally do not make any such assumption. Beyond this sort of confusion when moving between different products, you’re of course free to put any image in an OpenEXR file, linear or not, as long as you know what your data means.

This page was last modified 15:12, 25 Jul 2012.

© Copyright 2009 Autodesk Inc. All Rights Reserved.