16 bit color management question

Posted by birdman · Oct 23, 2004 · 428 views · 13 replies · Closed
Can anyone clarify this issue with facts?
I believe that monitor calibration software is programmed for 8 bit color, because the purpose is to translate monitor color gamut to printing color gamut and printing is an 8 bit process.
That would suggest that when you work in 16 bit color your monitor is not correctly calibrated.
When your 16 bit image is processed for printing it takes two more uncalibrated software hits.
The first hit converts the image to 8 bit color.
The second hit converts the image to the gamut of a particular printer. When the 16 bit image is converted down to 8 bits, many users feel there is a loss of the color range or tonality that they see on their monitor. That 8 bit monitor image is the one that profiles will use to convert for printing. If one is then going to have to readjust this 8 bit image to achieve calibrated, i.e. predictable, printed results then:
why use 16 bit color in the first place?


nomail
Oct 23, 2004
bmoag wrote:

Can anyone clarify this issue with facts?
I believe that monitor calibration software is programmed for 8 bit color, because the purpose is to translate monitor color gamut to printing color gamut and printing is an 8 bit process.

No, printing has nothing to do with it. Your monitor is an 8 bit device, so it's calibrated as such.

That would suggest that when you work in 16 bit color your monitor is not correctly calibrated.

Your monitor is correctly calibrated, but what you see is an 8 bit representation of your 16 bit image.

When your 16 bit image is processed for printing it takes two more uncalibrated software hits.

What on earth is an ‘uncalibrated software hit’?

The first hit converts the image to 8 bit color.

Converting from 16 bits to 8 bits has nothing to do with calibration.

The second hit converts the image to the gamut of a particular printer. When the 16 bit image is converted down to 8 bits many users feel there is a loss of color range or tonality that they see on their monitor.

I doubt anyone can see the difference, let alone on a monitor.

That 8 bit monitor image is the one that profiles will use to convert for printing,

No, your monitor image is not used for printing, the real image is used.

If one is then going to have to readjust this 8 bit image to achieve calibrated, i.e. predictable, printed results then:
why use 16 bit color in the first place?

16 bits is used for color and density corrections. After that, you convert to 8 bits. That’s what you use to print.
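Johan's workflow is easy to sketch numerically. A minimal numpy example (my own construction, not from the thread; the gamma-style edit is just an illustrative strong correction): do an edit and its inverse once entirely in 8 bits, and once in 16 bits with a single final drop to 8 bits, then count surviving levels.

import numpy as np

ramp = np.linspace(0.0, 1.0, 256)        # a smooth tonal ramp

def edit(x):                             # a strong density correction...
    return x ** 2.2

def undo(x):                             # ...and its approximate inverse
    return x ** (1 / 2.2)

# 8-bit workflow: round to 8 bits after every step
a = np.round(edit(ramp) * 255) / 255
a = np.round(undo(a) * 255) / 255

# 16-bit workflow: round to 16 bits per step, drop to 8 bits at the end
b = np.round(edit(ramp) * 65535) / 65535
b = np.round(undo(b) * 65535) / 65535
b = np.round(b * 255) / 255

print(len(np.unique(a)))                 # far fewer than 256 levels survive
print(len(np.unique(b)))                 # nearly all 256 levels survive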


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
Mike Russell
Oct 23, 2004
bmoag wrote:
Can anyone clarify this issue with facts?

That has been known to happen, on rare occasion, in this group. 🙂

I believe that monitor calibration software is programmed for 8 bit color, because the purpose is to translate monitor color gamut to printing color gamut and printing is an 8 bit process.

Color profiles are not associated with any particular bit depth, and can convert 16 bit (aka hibit) images as well as 8 bit. That said, yes, a monitor profile, and the software that creates that profile, is limited to processing the bit depth of the video card. The resulting profile is in no sense an 8 bit profile, any more than a curve created in 8 bits is an 8 bit curve. It is based on a large number of interpolated brightness values and will accurately convert hibit data.
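A short sketch of this point (my construction; the control points are hypothetical "measured" values): a calibration curve is just control points plus interpolation, so the very same curve serves data at any bit depth.

import numpy as np

# hypothetical measured calibration points (input, output), 0..1
xs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ys = np.array([0.0, 0.21, 0.48, 0.77, 1.0])

def apply_curve(codes, max_code):
    # evaluate the same curve for integer codes at any bit depth
    return np.round(np.interp(codes / max_code, xs, ys) * max_code)

print(apply_curve(np.array([0, 128, 255]), 255))        # 8-bit data
print(apply_curve(np.array([0, 32768, 65535]), 65535))  # 16-bit data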

That would suggest that when you work in 16 bit color your monitor is not correctly calibrated.

Your monitor is displaying a calibrated 24 bit version of your hibit data.

When your 16 bit image is processed for printing it takes two more uncalibrated software hits.
The first hit converts the image to 8 bit color.
The second hit converts the image to the gamut of a particular printer.

The printed data is not based on the monitor image data, but on your original image data. So there are two hits, so to speak, but these are separate from, and not in addition to, the display conversion.

You may avoid one of the two "hits" by applying the printer profile manually in Photoshop before dropping down to 8 bits, and specifying "Same as Source" for the color management. My guess is you will see no difference in the final output.
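Mike's guess can be checked with a stand-in conversion (a hypothetical smooth tone curve; a real printer conversion is a 3D LUT): compare quantize-then-convert against convert-then-quantize.

import numpy as np

ramp16 = np.linspace(0.0, 1.0, 65536)

def to_printer(x):            # hypothetical smooth gamut-mapping curve
    return 0.05 + 0.9 * x ** 1.1

# hit order A: drop to 8 bits first, then convert for the printer
a = np.round(to_printer(np.round(ramp16 * 255) / 255) * 255)

# hit order B (Mike's suggestion): convert in 16 bits, quantize once
b = np.round(to_printer(ramp16) * 255)

print(np.abs(a - b).max())    # at most a code value or so of difference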

When the 16 bit image is converted down to 8 bits many users feel there is a loss of color range or tonality that they see on their monitor.

The feeling exists, but is it founded in fact or fancy? You are limited by your display bit depth, and whether the profile is applied to 16 bit or 8 bit data would seem unimportant. To my knowledge no one has supplied a screen capture demonstrating this "loss of tonality". If there is such a loss of tonality in prints, it would probably show up first in dark blue shadow detail, where printers have a substantial edge over monitors.
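A numeric stand-in for the screen-capture test Mike asks for (my sketch, synthetic data): count how many distinct shadow levels survive a 16-to-8-bit conversion of a dark gradient.

import numpy as np

# a 16-bit gradient covering only the darkest 5% of the range
shadows16 = np.linspace(0, 0.05 * 65535, 1000).astype(np.uint16)
shadows8 = np.round(shadows16 / 257).astype(np.uint8)   # 65535 == 255 * 257

print(len(np.unique(shadows16)))   # 1000: every sample is distinct
print(len(np.unique(shadows8)))    # 14: only fourteen levels survive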

BTW – the gamuts of today's inkjet printers are typically quite large – somewhere between Adobe RGB and sRGB. If you'd like to compare gamuts for yourself, check out the free LabMeter download at the Curvemeister site: http://www.curvemeister.com/downloads/index.html

That 8 bit monitor image is the one that profiles will
use to convert for printing.

The printed image will always be based on your image data, not on what happens to be on your monitor. For example, you may use Photoshop in 256 color mode, and your printed image will be the same as if you had used a 24 bit monitor.

If one is then going to have to readjust
this 8 bit image to achieve calibrated, i.e. predictable printed results then: why use 16 bit color in the first place?

Even if your final output is 8 bit, the "conventional wisdom" says that you may manipulate your images more freely in 16 bit, confident that any mathematical roundoff that occurs will be limited as much as possible. This issue is more relevant these days because hibit capture devices are common – what is still missing are hibit printers.

You have apparently found an additional, unnecessary source of roundoff error, and perhaps there are those who will use the additional procedure I described to eliminate it.

IMHO this "error" is of theoretical interest only, but there are people, perhaps including yourself, who begrudge any source of inaccuracy that might impact their images. Photographers have historically been very particular about any issue affecting image accuracy and permanence.

BTW your question caught my eye because an additional, unnecessary LUT quantization occurs in Photoshop's curves function when the composite RGB or CMYK curve is used. A small difference, but it is there. Yes, I fixed it in Curvemeister.
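What such a LUT quantization looks like can be sketched as follows (my construction; this is not Curvemeister's or Photoshop's actual code): push 16-bit data through a 256-entry lookup table versus evaluating the curve directly.

import numpy as np

curve = lambda x: x ** 0.8             # an arbitrary brightening curve
data16 = np.linspace(0.0, 1.0, 65536)

# push the 16-bit data through a 256-entry lookup table...
lut = curve(np.arange(256) / 255.0)
via_lut = lut[(data16 * 255).astype(int)]

# ...versus evaluating the curve directly at full precision
direct = curve(data16)

print(np.abs(via_lut - direct).max())  # ~0.01: hundreds of 16-bit steps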


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Bob 4
Oct 24, 2004
There is no valid use for 16 bit color in our 8 bit world. It's just a gimmick from Adobe to sell the next upgrade program. All your hardware is 8 bit, and you can't actually see the difference on your 8 bit monitor; only the Adobe cash registers can see the difference it brings.

Monty Jake Monty
Oct 24, 2004
Except that when using levels or curves adjustments you do much less damage to the histogram in 16 bit mode. It's worth it to switch to 16 bit to do your basic adjustments and then switch back to 8 bit.
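A numpy sketch of this histogram "combing" (my construction, artificial data): stretch a narrow tonal range to full range, once after converting to 8 bits and once while still in 16 bits.

import numpy as np

# a smooth 16-bit ramp occupying only the 8-bit range 100..199
ramp16 = np.linspace(100 * 257, 199 * 257, 10000)

# workflow A: convert to 8 bits first, then stretch levels to 0..255
a = np.round(ramp16 / 257)
a = np.round((a - 100) / 99 * 255)

# workflow B: stretch in 16 bits, convert down to 8 bits afterwards
b = np.round((ramp16 - 100 * 257) / (99 * 257) * 65535)
b = np.round(b / 257)

print(len(np.unique(a)))   # 100 occupied bins out of 256: a combed histogram
print(len(np.unique(b)))   # 256 occupied bins: a smooth histogram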

Steve

— faith \’fath\ n : firm belief in something for which there is no proof. Webster’s Dictionary


Mike Russell
Oct 24, 2004
Monty Jake Monty wrote:
Except when using levels or curve adjustments you do much less damage to the histogram in 16 bit mode. Worth it to switch to 16 bit to do your basic adjustments and then switch back to 8 bit.

The OP's question was: where is the logic in working in 16 bits if the print profile conversion fumbles the bottom bit of your 8 bit printout?

It’s a good question.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Monty Jake Monty
Oct 24, 2004
I was responding to Bob's statement. See below.

"There is no valid use for 16 bit color in our 8 bit world. It’s just a gimmick from Adobe to sell the next upgrade
program. All your hardware is 8 bit, and you can’t
actually see the difference on your 8 bit monitor, only
the Adobe cash registers can see the difference it brings."

Steve

— faith \’fath\ n : firm belief in something for which there is no proof. Webster’s Dictionary


Robert Feinman
Oct 24, 2004
I have a discussion of using 16 vs 8 bits on my web site. You can judge for yourself if it makes a difference. From my experience there is a slight improvement in the smoothness of gradation in the darkest tones if your contrast and brightness corrections are done on a 16 bit image, but others claim not to see it.
Follow the tips link on my home page if you are interested. —
Robert D Feinman
Landscapes, Cityscapes and Panoramic Photographs
http://robertdfeinman.com
mail:
Rasmus
Oct 29, 2004
If you have 16 bit photos, for example from a scanner.
If you correct your images in ps, you
Mike Russell
Oct 29, 2004
Ignore the histogram and keep your eyes on the prize, the final image. —
Mike Russell
www.curvemeister.com
www.geigy.2y.net

Rasmus wrote:
If you have 16 bit photos, for example from a scanner.
If you correct your images in ps, you
Timo Autiokari
Oct 29, 2004
"bmoag" wrote:

I believe that monitor calibration software is programmed for 8 bit color,

Yes they are, because that is the way the hardware is built (both PC and Mac): there is just an 8-bit/c path to the display.

because the purpose is to translate monitor color gamut to printing color gamut and printing is an 8 bit process.

No, the bit depth of the data only defines the gradation; the end points of the range do not change at all, and it is the end points that define the gamut and dynamic range.
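A tiny illustration of this (mine, not Timo's): normalize any bit depth to 0..1 and the end points, hence the gamut, are identical; only the number of steps in between changes.

import numpy as np

for max_code in (255, 65535):
    codes = np.arange(max_code + 1) / max_code    # normalize to 0..1
    print(codes.min(), codes.max(), len(codes))   # same end points, more steps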

That would suggest that when you work in 16 bit color your monitor is not correctly calibrated.

It is correctly calibrated.

You do not see the benefit of the better gradation on the CRT until you 1) make one strong modification to the data or 2) make many small modifications to the data. The latter case is very difficult to notice, since vision tends to adapt to gradually decreasing quality. Here is an example of that: go to
http://www.aim-dtp.net/aim/evaluation/gamma_error/processing_space.htm and scroll down, then do the following:

1) View the ‘Jpeg copy of the original’ image that is there as the default image for a short time, say 5 seconds.

2) Click the ‘Linear editing’ option selector; you’ll see a large change in the image appearance. View that image for 5 seconds as well.

3) Then click the next option selector, ‘Gamma-space 1.2 editing’, and notice that there is just a very tiny change in the image appearance. View that image for 5 seconds as well.

4) Repeat step 3 for each of the remaining five ‘Gamma-space nn editing’ options. After each of them you’ll notice only a very small change in the image appearance, if any change at all.

Finally, to see how large the difference really is, slowly toggle between the ‘Linear editing’ and ‘Gamma-space 2.2 editing’ option selectors.

When your 16 bit image is processed for printing it takes two more uncalibrated software hits.
The first hit converts the image to 8 bit color.
The second hit converts the image to the gamut of a particular printer.

The conversion down to 8-bit/c preserves the color management; only the gradation becomes coarser.

You can print from 16-bit/c space from Photoshop. I have not inspected this, but I hope Photoshop will first do the conversion to the printer's space and only after that the truncation/rounding to 8-bit/c. But you can always do what I do all the time (I print using an online photo finisher that only takes 8-bit/c data); a rough code sketch follows the list:

1) Make a duplicate (just in case)
2) Convert it manually to the printer's profile (this also gives the benefit that you have all the ICC options available)
3) Then convert to 8-bit/c and send directly to the printer (with color management switched off, since the image data of that duplicate is already color-managed for the printer).
4) Discard the dupe.
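A rough Pillow/ImageCms sketch of steps 2 and 3 (my construction; "photo.tif" and "printer.icc" are hypothetical, the working space is assumed to be sRGB, and the printer profile is assumed to be RGB; note Pillow converts 8-bit RGB here, whereas Photoshop can apply the profile while still in 16-bit/c):

from PIL import Image, ImageCms

src = Image.open("photo.tif").convert("RGB")      # hypothetical input
working = ImageCms.createProfile("sRGB")          # assumed working space
printer = ImageCms.getOpenProfile("printer.icc")  # hypothetical printer ICC

# step 2: convert to the printer's profile with an explicit ICC intent
converted = ImageCms.profileToProfile(
    src, working, printer, renderingIntent=ImageCms.INTENT_PERCEPTUAL
)

# step 3: the saved file is already color-managed for the printer, so
# print it with further color management switched off
converted.save("print_ready.tif")

The profileToProfile call also exposes the full choice of rendering intents that Timo mentions in step 2.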

When the 16 bit image is converted down to 8 bits many users feel there is a loss of color range or tonality that they see on their monitor.

There is sometimes a very faintly discernible change; I believe Photoshop also does some dithering in the display path.

That 8 bit monitor image is the one that profiles will use to convert for printing,

Profiles do nothing by themselves; they just sit there on your HDD, and the software you are using merely reads them. It depends on the software in what order the conversions happen; there is no technical reason why it could not be done as 1) convert to the printer's space and then 2) convert down to 8-bit/c.

why use 16 bit color in the first place?

There is always a benefit to working in 16-bit/c space: nearly every editing operation adds round-off errors of +/- 0.5 LSB (half of the smallest level). In Photoshop the smallest level of the 16-bit/c space is only 1/128 of what it is in the 8-bit/c space, so the more you edit, the more benefit you get from working in 16-bit/c.
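The arithmetic behind the 1/128 figure: Photoshop's "16-bit" mode is documented to store values on a 0..32768 scale (15 bits plus one), which is where the ratio comes from.

step_8bit = 1 / 255              # smallest normalized 8-bit/c step
step_16bit = 1 / 32768           # smallest Photoshop 16-bit/c step
print(step_8bit / step_16bit)    # ~128.5: per-edit round-off ~128x smaller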

ICC conversions are usually rather harsh on the data, especially those done through profiles built from a coarse 3D look-up table. So there is a benefit from 16-bit/c for printing and for almost everything else in your imaging workflow.

Timo Autiokari http://www.aim-dtp.net
bagal
Oct 29, 2004
Purty good stuff, Timo.

Does this run along the lines of "do not confuse noise with image fidelity?"

Aerticus
Waldo
Oct 30, 2004
Bob 4 wrote:
There is no valid use for 16 bit color in our 8 bit world. It’s just a gimmick from Adobe to sell the next upgrade
program. All your hardware is 8 bit, and you can’t
actually see the difference on your 8 bit monitor, only
the Adobe cash registers can see the difference it brings.

For web purposes, you’re absolutely right, but for scanning and digital photography your remark doesn’t make any sense.

Shooting in RAW with a digicam has several advantages. With 10 or 12 bits per channel, you're able to extract details from parts that are too dark (or too light).

The same story goes for scanning, e.g. slides.
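A small numpy sketch of Waldo's point (my own numbers: synthetic deep-shadow data and a linear 3-stop push):

import numpy as np

scene = np.linspace(0.0, 0.05, 5000)     # deep-shadow luminances

cap8 = np.round(scene * 255) / 255       # 8-bit capture
cap12 = np.round(scene * 4095) / 4095    # 12-bit RAW capture

boost = lambda x: np.clip(x * 8.0, 0.0, 1.0)   # a 3-stop exposure push

print(len(np.unique(np.round(boost(cap8) * 255))))    # 14 posterized levels
print(len(np.unique(np.round(boost(cap12) * 255))))   # 103 distinct levels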

Waldo
Timo Autiokari
Nov 2, 2004
"Aerticus" wrote:

Does this run along the lines of "do not confuse noise with image fidelity?"

Indeed, that is one way to put it! When I noticed this, several years ago, I decided that I would never release any of my post-processing work until I had double-checked it at least a couple of hours later, but preferably the next day. It is _amazingly_ easy to adapt to very low image quality during a post-processing session… one can absolutely think "this is going to be a masterpiece" or "oh, does it look good" when in fact it is total crap. I have found that it also helps a lot if I look at and assess high quality prints and/or high quality images on the CRT every now and then during post-processing.

Timo Autiokari http://www.aim-dtp.net
