Lessons, tips and tricks about photography, in particular digital photography.
Have you always wondered what color spaces, ICC profiles, gamma and color management are all about? This tutorial is for you: you will learn all the important concepts of color theory and you will be ready to understand how to use your monitor at its best.
Each color can be represented in several ways; the most common is through its RGB - Red, Green and Blue - components. Mixing some amount of red with some green and some blue, we can produce all colors: we just have to define in advance the range of each component, and then we can write any mix as RGB = (amount of red, amount of green, amount of blue). Components are also called channels. A typical range for each component/channel is a number from 0 (zero amount of that component) to 255 (maximum amount of that component): that gives 256 levels per component, and 256 is a range that can be expressed with 8 bits of information (one byte) in a computer.
Let's look at some examples of how to express colors with their RGB components:
- Pure red is just Red with no contribution from Green or Blue. So we can express pure red as Red = 255 (maximum), Green = 0, Blue = 0, or in a more compact form RGB = (255, 0, 0).
- Yellow is obtained by mixing full red with full green, so we can express pure yellow as RGB = (255, 255, 0).
- Brown is a mix of a good quantity of red plus green and blue in equal, smaller amounts: RGB = (165, 42, 42).
So when we say that we are using an "8 bit RGB" representation, that simply means we are expressing each color as a three-component Red/Green/Blue value, with each component going from 0 to 255. Curious how many colors we can represent with 8 bit RGB? 256*256*256, or 16.7 million colors. Wow, that is a huge number, and the human eye can distinguish roughly that many - in reality even more, as we will discover later on. We can conclude that "8 bit RGB" and "16 million RGB colors" are equivalent.
Can we use more than 256 levels per component? Yes, if our computer and monitor support them (we will see this in another article about monitor calibration). High-end monitors can display 1024 levels for each RGB channel, and to do that they need more than 8 bits per component: they need 10 bits. So a "10 bit RGB" monitor can display 1024*1024*1024, or about 1 billion colors! And the human eye still keeps up in distinguishing many of them.
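These color counts follow directly from the bit depth; a minimal Python sketch (the helper name is just for illustration) confirms the numbers above:

```python
def rgb_colors(bits_per_channel):
    """Number of distinct colors an RGB representation can express."""
    levels = 2 ** bits_per_channel   # e.g. 2^8 = 256 levels per channel
    return levels ** 3               # three independent channels

print(rgb_colors(8))    # 16777216 -> the "16.7 million colors" of 8-bit RGB
print(rgb_colors(10))   # 1073741824 -> about 1 billion colors for 10-bit panels
```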
Besides RGB, are there other color formats? Sure, a lot; we will see the most used ones.
The first is YUV: each color is expressed by the combination of luminance (brightness), as Y, while U and V are related to blue and red (they are not the B and R components as in RGB - see the deep dive for geeks below).
Y = 0 means no luminance, so it is black; Y = 1 (conventionally its maximum value) means maximum brightness, that is, white; Y going from 0 to max means black becoming dark grey, then light grey, until it becomes white.
U and V add the chromaticity to the black/grey/white contribution of Y, making it possible to have yellow, red, orange, green, blue and so on; U and V are expressed as numbers between -0.5 and 0.5.
For geeks: how does UV in YUV differ from BR in RGB?
U and V are called "projections" of Blue and Red: they modify just the tonality (chromaticity) of the color without altering its brightness, which is represented by the Y value. They tell the monitor/TV how much to shift the grey towards blue and red while preserving the luminance - whereas with RGB, any variation of any component also produces a luminance variation.
YUV was invented when the television signal had to be enhanced from just black&white to color: the black&white signal became the Y component, and the UV channels were added for the TV sets able to display colors, while B&W TVs just kept using the Y signal alone. This choice enabled another important achievement that has been widely used ever since: when bandwidth is limited, we keep the Y/luminance at maximum quality, since it is the most important, and we reduce the quality - and so the bandwidth - of UV; we get less precise colors, but the overall scene is not much degraded. Note that YUV is an analog format, not a digital one expressed in bits.
Let's take an example with the YUV format: we have seen that yellow in RGB representation is (255, 255, 0); in YUV, yellow is (Y = 0.89, U = -0.5, V = 0.08).
Actually, a slight variation of YUV was used for TVs, called Y'UV, where Y' is the luma instead of the luminance, but the basic idea is the same.
For geeks: 1) what is the difference between luma and luminance? 2) Are luminance and brightness exactly the same concept?
Luma is the luminance with a non-linear correction, so that, for example, doubling the Luma does not correspond to a double level of light, while doubling the Luminance does. In other words, Luma includes a gamma correction - we will explain gamma correction later in this tutorial.
About the second question: technically, Brightness is the subjective, individual perception of luminance, so it is not expressed as an absolute value but typically as a percentage of the individual's perceived maximum.
We have said that YUV/Y'UV are analog formats, but in our digital world we use a popular format taken directly from the old YUV: it is YCbCr/Y'CbCr, where Cb and Cr are exactly the blue and red chromaticity differences, like U and V in YUV. Which format do MPEG and JPEG use? Y'CbCr! And DVD? Again Y'CbCr, even if someone, imprecisely, says it uses YUV - but now you know the difference. Similarly, the YPbPr format is essentially the same thing: it is the analog counterpart of digital YCbCr, used on analog (component video) connections. Just to add some confusion to the terms 😉.
The yellow color written in YCbCr with 8 bits (using signed values for Cb and Cr) is approximately (227, -127, 21).
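For geeks who want to verify these numbers, here is a small Python sketch of the conversion, using the BT.601 luminance weights and the chroma scaling used in this article (U and V in ±0.5, 8-bit Cb/Cr as signed values); the exact constants vary slightly between standards:

```python
def rgb_to_yuv(r, g, b):
    """RGB in [0, 1] -> Y in [0, 1], U and V in [-0.5, 0.5] (BT.601 weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = (b - y) / 1.772                     # blue difference, scaled to +/- 0.5
    v = (r - y) / 1.402                     # red difference, scaled to +/- 0.5
    return y, u, v

y, u, v = rgb_to_yuv(1.0, 1.0, 0.0)                    # pure yellow
print(round(y, 2), round(u, 2), round(v, 2))           # ~(0.89, -0.5, 0.08)
# 8-bit signed YCbCr is just the same values scaled by 255
print(round(255 * y), round(255 * u), round(255 * v))
```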
Another format is HSL: each color is represented by its Hue (the tonality of color among the rainbow colors, expressed in degrees from 0° for red up to 360°), Saturation (how intense the color is) and Luminance (which we already know).
For example, yellow in HSL is written as (60°, 100, 50). This format is widely used in Photoshop and photo editing, since it allows changing separately the parameters that humans perceive: for example, if you want to change only the color of an object from yellow to blue while preserving its intensity and brightness, you just need to move the Hue slider from 60° to 240°. Similarly, if an object is too dark or too bright, you can just change the luminance without altering its color tonality.
HSB (also known as HSV) is very similar to HSL, with luminance replaced by Brightness (or Value), which is defined slightly differently: a pure color has B = 100 but L = 50.
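The hue-only edit described above can be reproduced with Python's standard colorsys module (which works with values in [0, 1] and uses the HLS component order):

```python
import colorsys

# pure yellow in HSL: hue 60 degrees, saturation 100%, lightness 50%
h, l, s = colorsys.rgb_to_hls(1.0, 1.0, 0.0)
print(round(h * 360), round(s * 100), round(l * 100))   # 60 100 50

# change only the hue from 60 to 240 degrees: yellow becomes pure blue,
# with saturation and lightness untouched
r, g, b = colorsys.hls_to_rgb(240 / 360, l, s)
print(r, g, b)   # 0.0 0.0 1.0
```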
Another format, used in advanced applications, is Lab, or CIE 1976 L*a*b*. We will examine it in the next chapter about Color Spaces; here we just say that colors are again represented with 3 components: L stands for Lightness (similar to Luminance, ranging from black at 0 to white at 100), a* is a color channel ranging from green (-120) to red (120), and b* is another color channel ranging from blue (-120) to yellow (120).
The last format we will learn about is CMYK: the acronym stands for Cyan Magenta Yellow blacK, so it is a way to express a color with 4 components instead of 3. Why? It is used for printing, where the process is the reverse of the one we have seen with light in RGB: the paper is (typically) white which, as we already know, is the mix of equal amounts of Red, Green and Blue, so printing on white means subtracting colors from white rather than adding them. The fourth channel, the black one, is used to cope with specific issues in printing: obtaining black by mixing the other colors would mean pouring a lot of ink, text is typically black, and black ink is less expensive than the colored ones.
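As a toy illustration of this subtractive logic (real printer conversions go through device-specific profiles, not a formula like this), here is a naive RGB-to-CMYK sketch:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion, RGB in 0-255 -> CMYK in 0-1 (no printer profile)."""
    # subtractive step: how far each channel is from white
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    # pull the part common to the three inks into the cheaper black channel
    k = min(c, m, y)
    if k == 1.0:                      # pure black: K ink only
        return 0.0, 0.0, 0.0, 1.0
    scale = 1 - k
    return (c - k) / scale, (m - k) / scale, (y - k) / scale, k

print(rgb_to_cmyk(255, 255, 0))   # yellow -> (0.0, 0.0, 1.0, 0.0): yellow ink only
```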
We have seen that when YUV was defined to create color TV, the basic idea was to express colors separately from the luminance (the black&white brightness signal). We can elaborate on that: human vision is much more sensitive to brightness variations than to color variations, so when we need to reduce the bandwidth or size of our image for transmission or storage, we can use more precision/information to represent brightness and less for colors.
That idea is called chroma subsampling and has become more and more used, from color TV to... the JPEG image format and digital video files.
Using the YCbCr format (explained in the Color Formats chapter) and setting the value 4 for full quality, we can express the chroma subsampling with 3 digits, like 4:2:0:
- the first digit is the quality/resolution of luminance (Y), which always has the maximum value of 4;
- the second digit is the resolution of chroma (Cb and Cr) for horizontal pixels (columns of pixels on the screen); it can be 4 (maximum quality), 2 (half the resolution horizontally) or 1 (a quarter of the resolution);
- the third digit is either the same value as the second digit, or 0 when the color resolution is also halved vertically, i.e. the colors of the 2nd row of pixels are copied from the 1st row, the colors of the 4th row from the 3rd row, and so on. So basically we have half the vertical resolution for the colors, while brightness keeps its full resolution for every single pixel.
Let's see some examples:
- 4:4:4: maximum quality with no chroma subsampling, used in post-production for cinema and in high-end scanners;
- 4:2:2: the color resolution is halved horizontally; about a third of the size/bandwidth is saved with this trick. It is used in some high-end video formats.
- 4:2:0: the color resolution is halved both horizontally and vertically. It is very widely used: in the PAL signal (the old European analog TV signal), in JPEG images, in DVD-Video and Blu-ray Discs, and in the AVCHD and VC-1 video formats.
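The savings are easy to verify with a toy sketch: with 4:2:0 each chroma plane keeps one sample per 2x2 block of pixels (here a simple top-left pick; real encoders usually average the block):

```python
def subsample_420(plane):
    """4:2:0 style: keep one chroma sample per 2x2 pixel block."""
    return [row[::2] for row in plane[::2]]

width, height = 8, 4
luma_samples = width * height                 # Y stays at full resolution: 32
chroma_plane = [[128] * width for _ in range(height)]
cb = subsample_420(chroma_plane)              # 4 columns x 2 rows = 8 samples
chroma_samples = 2 * len(cb) * len(cb[0])     # Cb + Cr together: 16

# 4:4:4 would store 3 full planes (96 samples); 4:2:0 stores 32 + 16 = 48,
# i.e. exactly half the data
print(luma_samples + chroma_samples)
```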
So far it seems quite easy (there are just several ways of expressing a color with numbers), but things are not so simple in reality, since we may wonder: how does the maximum level of a component look in the real world? What is "pure red"? We can get different "versions" of pure red on a budget monitor, on a high-end one, on a TV; that is why RGB is said to be a device-dependent color model. So we need to define precisely how the components look, and that also determines all their combinations. In more technical words, we have to define the color space of the device showing the colors: the range of colors that a specific device can show with RGB or any other format.
But if every device has its own color space, how is it possible to objectively know how colors must or will be displayed? We need a way to define them in a precise, measurable way.
Here comes the need for absolute color spaces: these are non-ambiguous color spaces where colors are precisely defined and standardized. Other color spaces need to provide formulas for linking/converting colors to and from an absolute color space, so that there is no uncertainty about what is being shown. Typically this conversion is done through an ICC Profile, which contains exactly the formulas for converting colors from one color space to another. ICC Profiles on our computers are files with extension .icc or .icm (on Windows).
Before looking at absolute color spaces, we need to introduce the last two concepts: white point and gamma.
In each color space, the first element to be defined is the white point, that is, how "pure white" looks in that specific color space. This is very important: in photography, one of the typical corrections of a picture is fixing the White Balance, for example to remove the yellow cast of a shot taken under incandescent light or the blue cast of a shot taken in the shade.
In fact, the "cast" is determined by the illuminant, the "color" of the ambient light. The typical way of measuring the "chromaticity" of light radiated by a hot body, like a bulb filament (yellow) or a gas flame (blue), is its color temperature, measured in kelvin. Color temperatures above 5000 K are cool light sources (like on a cloudy day or in the shade), while below 5000 K, and especially around 3000 K, are warm light sources (like a bulb, or a sunset).
The CIE - International Commission on Illumination - has defined various types of typical illuminants, so that corresponding standard white points are defined:
- Illuminant A represents domestic tungsten-filament lighting, with a color temperature of 2856 K, so very yellow.
- Illuminants B and C represent noon sunlight, respectively at intermediate and at northern latitudes, at 4874 K and 6774 K, but they are obsolete and no longer used.
- Illuminant D: this is the most used, and there are several subtypes:
- D50: horizon light, 5003 K.
- D55: mid morning or mid afternoon light, 5503 K.
- D65: noon daylight, 6504 K, so a little more bluish than D55/D50. It is the most common white point, used in popular RGB color spaces and in television.
- Illuminant E: a theoretical light with constant power for each color component; because of how it is defined, it is not exactly measurable as a color temperature, but its appearance is very close to D55.
The last important concept is gamma correction: for the luminance, it is a way to alter how dark and light areas are rendered, and the reference value is 1. More precisely, processing an image with:
- a gamma lower than 1 will reduce the differences between dark tones, so that they look almost alike and darker, while it will expand the differences between light grey tones to better separate them - typically useful to recover some clouds from a blown-out sky;
- a gamma greater than 1 will expand the differences between dark tones, so that shadows show up (typically used for recovering areas that are too dark), while at the same time it will reduce the differences between light grey tones.
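Under the hood this is just a power curve; a minimal Python sketch (using the photo-editing convention output = input^(1/gamma), so the numbers behave as described above - other tools may use the inverse convention):

```python
def apply_gamma(value, gamma):
    """Apply gamma to one 8-bit tone: output = input ** (1 / gamma)."""
    return round(255 * (value / 255) ** (1 / gamma))

dark, light = 64, 220
# gamma 2.2 lifts the shadows a lot while barely brightening the highlights
print(apply_gamma(dark, 2.2), apply_gamma(light, 2.2))
# gamma 0.2 crushes the shadows towards black and separates the highlights
print(apply_gamma(dark, 0.2), apply_gamma(light, 0.2))
```

Note that black (0) and white (255) are left untouched by any gamma: only the midtones move.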
Let's see it in practice with an example.
Now let's see what happens when processing the image with a low gamma of 0.2:
The sky (light tones) is much better, with recovered clouds, but dark areas become indistinguishable.
On the other side, let's process the original image with a gamma higher than 1 (we use 2.2 here):
Shadows reveal details that were invisible before, but the sky (light tones) gets a bit washed out. Note that raising gamma is not like raising luminosity: if we had increased luminosity by 70% instead of setting gamma to 2.2, all tones would have become equally brighter, with a much worse result in bright areas like the sky:
Most common color spaces
Now that we know what color space, white point and gamma are, let's take a look at the most used absolute color spaces, comparing the width of their color ranges, called the color gamut.
- The most obvious color space is the one that matches the range of colors of the (average) human eye, which is very wide, especially in some colors of nature like the tones of green. The color space that represents human vision has been measured; it is a "theoretical" model called CIE 1931 XYZ, or simply CIE XYZ. It takes its name from the International Commission on Illumination (CIE from the original French name "Commission Internationale de l'Éclairage"), which in 1931 defined this model with mathematical functions representing how the cones and rods of the eye's retina are stimulated. Colors in the CIE XYZ color space are not represented with RGB values but with the three components XYZ: X is similar to the sensitivity of the eye's cones to red, Y is the overall luminance, while Z is very similar to the sensitivity to blue. As seen in the previous chapter, Y tells us how bright/dark a color is, while X and Z together tell us the chromaticity. The white point used in this space is E.
- The most used absolute color space is sRGB. Here colors are represented in the usual RGB format and the white point is D65. It was defined in 1996 and later standardized by the IEC in 1999. The color fidelity of consumer computer screens is typically measured against the sRGB color space.
- Adobe RGB is another color space based on the RGB format, specified in 1998; it includes a wider gamut than sRGB, especially in the greens, where the human eye is especially sensitive (all the tonalities of grass, trees, plants...). Only professional monitors can display the whole Adobe RGB color range. The white point is again D65.
- ProPhoto RGB: a professional color space that is even wider than Adobe RGB and the closest to the human vision range, again based on RGB primary components. It was standardized in 2006 and refined in 2013. No consumer monitor can display such a wide color space, which covers over 90% of CIE 1976 LAB, but it is often used, for example in Adobe Photoshop, and it is the default in Lightroom for image development, since it does not introduce any loss in color fidelity during editing. Here the white point is D50.
- CIE 1976 L*a*b* is another color space (we have seen it in the previous chapter) that is absolute once the white point is specified. Its gamut is extremely wide, so wide that mathematically it can express colors outside of human perception - in that sense they are no longer "colors" 😉. This color space is used in much photography software, like Adobe Photoshop (Lab mode), Affinity Photo (Lab), RawTherapee, and also in... PDF.
- DCI-P3: similar in range to Adobe RGB but with a white point of about 6300 K, slightly shifted towards green; it was defined in 2010 by Digital Cinema Initiatives (DCI), so it is used for movies. Its color gamut is 25% larger than sRGB. High quality monitors and high-end smartphones can cover most of the DCI-P3 gamut.
- Display P3: Apple's version of DCI-P3, with two differences from DCI-P3: the white point is D65 and the gamma curve is that of sRGB.
- UHDTV: defined in 2012 and refined in 2016 for High Definition and Ultra HD TV (4K).
- YCbCr and HSL (which are color spaces besides being formats, as seen in the previous chapter);
- many flavors of CMYK that are device (printer) specific;
- xvYCC for video.
We have understood that it is fundamental to work in a precise color space in order to represent colors as accurately as possible; otherwise, "full red" or "half blue" are colors that depend on the illumination and on the device showing them.
But we can imagine that no device is perfect, so there is no monitor, not even a professional one, able to perfectly and consistently over time display an absolute color space. Each monitor has its own specific behavior: for example, against the professional Adobe RGB, it may display yellows a bit more intensely or greens a tiny bit more bluish. Put simply, each device has its own native color space, certainly not exactly one of the absolute ones. How do we cope with that?
Here ICC profiles, standardized by the International Colour Consortium (ICC), come to help: these profiles make it possible to measure and compensate in various ways for the effects of device imperfections - for example, how to translate an absolute color space into a monitor's one and how to deal with its limitations - so that our computer, through the profile, knows how to "tune" the colors sent to the monitor in order to display them in the best possible way. It will not be perfect, since some tolerance always exists and since the monitor must be physically able to display almost all the colors in the chosen color space.
In practice, for each monitor (each individual unit, not each model, since monitors of the same model do not behave exactly the same) we need a specific ICC profile that describes the relation/conversion between the monitor's color space and a reference absolute color space, called the Profile Connection Space (PCS) by the ICC: this PCS is either CIE LAB or CIE XYZ, and we know these guys already! An ICC profile is stored inside files with extension .icc or .icm (the latter only on Windows).
Just for the sake of completeness, there is also another type of ICC profile that describes the conversion between two specific color spaces, for example between the color spaces of two printers.
Since the specific color space to be translated from the PCS is typically narrower than the PCS, which is a wide-gamut color space, a problem arises: how do we deal with colors that are included in the PCS but not in the specific color space? For example, if a certain monitor cannot physically display greens beyond a certain tonality that is instead included in the PCS and visible to the human eye, which color will we use on that monitor to display that "undisplayable" color...?
Here comes the concept of rendering intent: put simply, when dealing with limitations, the intent indicates how to compromise in that situation. There are basically 4 possible intents, as follows:
- Absolute colorimetric: colors are displayed on the output (a monitor or a printer) without modifications from the source space - no conversion of white point and no modification of saturation - except for out-of-gamut colors, which are represented with colors at the boundary of the gamut (i.e. a very deep blue that cannot be physically reproduced by an ink printer will be shown as a blue with the out-of-range components simply clipped). This intent displays the possible (in-gamut) colors exactly, but the impossible (out-of-gamut) colors oddly. The narrower the gamut of our device, or the more unusual its white point, the worse the results with this intent; with a very wide output gamut and a good white point, the results will be excellent.
- Relative colorimetric: like the previous one, but the white points are converted; out-of-gamut colors are again moved to the nearest color within the gamut. The white point conversion can induce some modification of saturation, but much less than the following intent.
- Perceptual: here the goal is to preserve the "pleasant appearance" of the overall image, so out-of-gamut colors are displayed at the boundaries of the range and the other, in-gamut colors are "compressed", typically desaturated. The idea is to preserve the relative difference between all the colors, including the out-of-gamut ones. In this way color transitions are well preserved and there is no "banding". The downside is that in-gamut colors are shown less vivid than they should be, since their saturation is "squeezed".
- Saturation: similar to the previous one, but with the secondary goal of not reducing saturation too much (for when popping colors are needed) in the color compression that accommodates out-of-gamut colors; if a color would end up too desaturated with the perceptual intent, with the saturation intent the compression is limited, at the expense of slightly altering its hue (tone).
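Just to visualize the difference between the two main behaviours, here is a toy sketch (real rendering intents operate through the ICC profile in a perceptual space such as CIE LAB, not directly on raw RGB values as done here):

```python
def clip_intent(rgb):
    """Colorimetric style: only out-of-gamut components get clipped."""
    return tuple(min(255, max(0, c)) for c in rgb)

def compress_intent(rgb, max_seen):
    """Perceptual style: scale ALL colors so the most saturated one fits,
    preserving the relative differences between them."""
    scale = min(1.0, 255 / max_seen)
    return tuple(round(c * scale) for c in rgb)

out_of_gamut = (300, 40, 40)          # a red too intense for the device
in_gamut = (200, 40, 40)
print(clip_intent(out_of_gamut))      # (255, 40, 40): only the bad color changes
print(compress_intent(in_gamut, 300)) # (170, 34, 34): in-gamut colors desaturate too
```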
The most commonly used rendering intents are Relative Colorimetric and Perceptual, and every ICC profile is built with a specific rendering intent. When in doubt, choosing Perceptual is the safest way.
A brief recap of what we've seen in this tutorial:
- there are several ways to translate a color into a formal representation, as numbers describing its components according to different models. So we have different formats in which we can express colors, the best known being RGB, YUV/YCbCr, HSL/HSB, CIELAB and CMYK;
- a way of saving bandwidth or file size when transmitting/storing colors is to reduce the resolution of the tonality components while preserving maximum resolution for the brightness component; this is called chroma subsampling and it is widely used in JPEG, DVD, Blu-ray and video formats;
- to identify how a color looks in an objective way, and all the colors a device (a monitor, a printer) can possibly show, we need to define color spaces, and in particular absolute color spaces that act as an objective reference. Color spaces are characterized by gamut (how large the color space is), gamma (the non-linear transformation between values and their appearance) and white point. The most common absolute color spaces are sRGB, Adobe RGB, ProPhoto RGB and DCI-P3;
- to convert from one color space into another, ICC profiles are used, especially for monitor calibration. Since conversion between color spaces cannot be perfect because of device limitations, there are several ways to deal with them: the rendering intents, of which the most used is Perceptual.
If you want more clarification, do not hesitate to comment below.
Using a tripod is not very common, but there are situations where, without a tripod, it is impossible to get a great shot, typically:
- when shooting with a long telephoto (let's say from 400 mm on), especially if it is windy;
- when taking long exposures, after sunset or at night for star shots, or to create special effects like flowing water/waves;
- when shooting remotely, for example to catch animals passing;
- when taking multiple shots with exact same framing, like for timelapse.
Stacking is a useful and common feature in most advanced photo catalog/archiving software; it consists of grouping multiple photos and showing just one of them (the one that is put on top of the stack).
It is useful when you have multiple shots of the same subject and want to compact the display.
Each piece of software has its own way of handling stacking. This article describes in detail how Adobe Lightroom handles stacks, which is not so intuitive (you'll see that, depending on the function, a stack can be treated in one way or another).
"How to focus properly?", "What does DOF mean?", "Is auto-focus precise?", "What does back-focus and front-focus mean"?
Focusing is not as easy as it might seem, but it's one of the most important techniques in photography: a photo that is too dark or too bright can be corrected via software, while an out-of-focus photo can hardly be fixed, if at all.
This lesson presents the basic principles of focus that are useful to know.
Today I'm going to talk about hot/dead pixels, those annoying pixels that typically show up in long exposures (starting from around 1 second); they are one of the unwanted effects grouped under "digital noise", typical of long exposures and high ISO. You can see in the image below that the blue of the sky is not an even color and looks grainy: that is digital noise.