16- to 8-bit Mode Conversion

Posted By: East75th
Jan 26, 2004
Views: 777
Replies: 15
Status: Closed
I use PS7 on a PC. I’m having a problem with 16- to 8-bit conversion and wonder if any other PC users have noticed it also.
Try this:
1. Open a new 256×256 RGB image
2. Define a gray, e.g. <9, 9, 9>
3. Using the paint bucket, fill the image with the gray
4. Display the histogram. You should see a single straight line for each of the 4 charts with zero standard deviation
5. Convert to 16-bits, i.e. Image, Mode, 16-bits
6. Convert to 8-bits, i.e. Image, Mode, 8-bits
What results on my PC is that 237 of the pixels convert to <10,10,10>. The deviant pixels are scattered randomly, the first occurring at (0, 221). I wonder if I’m seeing an “undocumented feature” of PS. If this isn’t replicable, it’s probably a hardware bug on my machine.

I noticed this initially in a 16-bit image with large areas of uniform color. When I convert it to 8-bits, however, the incorrectly converted pixels are noticeable as speckles. On my PC this paradoxically means that using 16-bits means a loss of accuracy.
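For anyone who wants to poke at this outside Photoshop, here is a small Python sketch of the effect. It assumes a 15-bit internal scale (0–32768), which Photoshop is often reported to use for “16-bit” images, and a simple uniform-noise model of the dithered down-conversion; Adobe’s actual code is unknown to me, so the exact count of flipped pixels will differ from the 237 reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Steps 1-3 of the repro: a 256x256 image filled with gray <9, 9, 9>
img8 = np.full((256, 256), 9, dtype=np.uint8)

# "Convert to 16 bits" (assumption: a 15-bit 0..32768 internal scale)
img16 = np.round(img8.astype(np.float64) * 32768 / 255)

# "Convert to 8 bits" with dither: add uniform noise of half an 8-bit
# step either way before rounding (assumed model of the Use Dither option)
back = img16 * 255 / 32768
out = np.round(back + rng.uniform(-0.5, 0.5, back.shape)).astype(np.uint8)

values, counts = np.unique(out, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))  # mostly 9, a sprinkling of 10
```

The mechanism in this model: gray 9 does not land exactly on a 15-bit code, so the converted value sits a hair above 9.0 in 8-bit terms, and the dither noise pushes a small fraction of pixels over the rounding boundary to 10.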

Dane


Mike Russell
Jan 27, 2004
East75th wrote:
I use PS7 on a PC. I’m having a problem with 16- to 8-bit conversion and wonder if any other PC users have noticed it also.
Try this:
1. Open a new 256×256 RGB image
2. Define a gray, e.g. <9, 9, 9>
3. Using the paint bucket, fill the image with the gray
4. Display the histogram. You should see a single straight line for each of the 4 charts with zero standard deviation
5. Convert to 16-bits, i.e. Image, Mode, 16-bits
6. Convert to 8-bits, i.e. Image, Mode, 8-bits
What results on my PC is that 237 of the pixels convert to <10,10,10>. The deviant pixels are scattered randomly, the first occurring at (0, 221). I wonder if I’m seeing an "undocumented feature" of PS. If this isn’t replicable, it’s probably a hardware bug on my machine.
I noticed this initially in a 16-bit image with large areas of uniform color. When I convert it to 8-bits, however, the incorrectly converted pixels are noticeable as speckles. On my PC this paradoxically means that using 16-bits means a loss of accuracy.

By default Photoshop adds noise to the image when converting from 16 bits to 8 bits. There is logic behind this. It prevents banding in some situations.

The good news is you can turn off this feature by un-checking the "Use Dither" conversion option in the Color Settings dialog.



Mike Russell
www.curvemeister.com
www.geigy.2y.net
East75th
Jan 27, 2004
Mike Russell wrote:

By default Photoshop adds noise to the image when converting from 16 bits to 8 bits. There is logic behind this. It prevents banding in some situations.

With so few pixels changed, about 1 in 277, it’s difficult for me to imagine a situation where it would accomplish that.

The good news is you can turn off this feature by un-checking the "Use Dither" conversion option in the Color Settings dialog.

Thanks, that’s it. Talk about an obscure and hard to find option!

Dane
toby
Jan 27, 2004
East75th …
Mike Russell wrote:

By default Photoshop adds noise to the image when converting from 16 bits to 8 bits. There is logic behind this. It prevents banding in some situations.

With so few pixels changed, about 1 in 277, it’s difficult for me to imagine a situation where it would accomplish that.

Mike is perfectly correct – with any input which has more than 8 bits of dynamic range, adding noise before quantising will make a visible and effective improvement to the conversion to fewer bits (e.g. 16->8). It’s easy to give an example, and the method also applies to gradients – in fact wherever you are quantising a function which is continuous (or "smoother" than the output quantisation). Alternatively, error diffusion could be used (e.g. Floyd-Steinberg).

Picture a gradient in a 16-bit image from pixel value 0x100 to pixel value 0x200 (e.g. around 254 steps in value). If you quantise to 8 bits by simplistically truncating, you are effectively thresholding the function, and there will be a sudden transition (band) from pixel value 1 to pixel value 2, halfway across the gradient (assuming you round, e.g. by adding 128 before truncating).

If you add uniform noise with a magnitude of 256 before quantising (truncating), you will find that the pixel values in the 8 bit image are "dithered" in a distribution which reflects the shape of the deeper gradient (those discarded least significant 8 bits of image data). No sudden step is visible. In a more natural image (rather than this contrived example), the results are even more effective.

Before Photoshop dithered its gradients, I used to apply this method "manually" in PS to achieve smooth 8 bit gradients of, for example, 1% ink across an image. In this instance simple quantisation cannot do any better than maybe 2 or 3 wide steps across your *entire* gradient.
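toby’s gradient example is easy to reproduce numerically. This is a hypothetical sketch (NumPy, not Photoshop): a 16-bit ramp from 0x100 to 0x200, quantised to 8 bits once by plain rounding and once with uniform noise of one output step added first.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 16-bit ramp from 0x100 to 0x200, as in the example above
grad16 = np.linspace(0x100, 0x200, 4096)

# Plain quantisation: round by adding half an LSB (128), then truncate
plain = ((grad16 + 128) // 256).astype(np.uint8)

# Dithered quantisation: add uniform noise spanning one output LSB (256)
# before truncating, so the discarded low byte survives as a distribution
dith = ((grad16 + rng.uniform(0, 256, grad16.shape)) // 256).astype(np.uint8)

print(np.unique(plain))                      # [1 2] -- one hard band halfway
print(dith[:64].mean(), dith[-64:].mean())   # local averages track the ramp
```

The plain version is a step function: value 1 on the left half, 2 on the right, exactly the band described above. In the dithered version the proportion of 2s rises smoothly from near 0% to near 100% across the ramp, so the eye (or a downstream blur or print process) recovers the underlying gradient.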

T

The good news is you can turn off this feature by un-checking the "Use Dither" conversion option in the Color Settings dialog.

Thanks, that’s it. Talk about an obscure and hard to find option!
Dane
llutton
Jan 27, 2004
1. Open a new 256×256 RGB image
2. Define a gray, e.g. <9, 9, 9>
3. Using the paint bucket, fill the image with the gray
4. Display the histogram. You should see a single straight line for each of the 4 charts with zero standard deviation
5. Convert to 16-bits, i.e. Image, Mode, 16-bits
6. Convert to 8-bits, i.e. Image, Mode, 8-bits

What would you gain by converting an 8-bit image to 16-bit and then back to 8-bit? Wouldn’t that be like putting perfume on a pig? Sorry, I had to say that. Working on a 16-bit image is better, but the image should start out as 16-bit, I would think, as in a RAW image or a 16-bit scan.
Lynn
drjohnruss
Jan 28, 2004
(LLutton) wrote

What would you gain by converting an 8 bit image to 16 bit? and then back to 8 bit. Wouldn’t that be like putting perfume on a pig? Sorry, I had to say that. Working on a 16 bit image is better, but the image should start out as 16 bit, I would think, as in a RAW image, or a 16 bit scan.

Certainly starting with a 16 bit per channel image (even if it is really only 10-12 bits from a digital camera) and working entirely in 16 bit mode until the final step when you reduce to 8 bits for printing is the best way to proceed, when you can. But even if you start with an 8 bit image, there are many advantages to promoting it to 16 bits before processing. It prevents a lot of loss of precision in processing operations – that includes filters as well as gamma adjustments and layer modes. Most workflow experts recommend working in 16 bit mode whenever possible, and that was one of the major driving factors behind Photoshop CS providing complete support for 16 bit images and layers.
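Dr. Russ’s point about precision is easy to demonstrate. The sketch below (a hypothetical illustration, not Photoshop’s actual arithmetic) applies a gamma 2.2 adjustment and then its inverse, an edit plus a compensating edit, rounding to the working bit depth after each step. The 8-bit pipeline damages dozens of levels, mostly in the shadows; after promotion to 16 bits (here v * 257) the same round trip is almost lossless.

```python
import numpy as np

x = np.arange(256, dtype=np.float64)   # all 256 input levels

def gamma(v, g, levels):
    """Apply a gamma curve, rounding to the working precision."""
    return np.round((v / (levels - 1)) ** g * (levels - 1))

# Edit + compensating edit, rounding to 8 bits after each step
y8 = gamma(gamma(x, 2.2, 256), 1 / 2.2, 256)

# Same edits after promoting to 16 bits (v * 257 maps 255 -> 65535)
y16 = np.round(gamma(gamma(x * 257, 2.2, 65536), 1 / 2.2, 65536) / 257)

print(np.count_nonzero(y8 != x))    # dozens of levels come back wrong
print(np.count_nonzero(y16 != x))   # almost none come back wrong
```

In the 8-bit case the forward gamma collapses the darkest dozen or so levels to 0 and they can never be recovered; at 16 bits the intermediate values carry enough precision that nearly every level survives the round trip intact.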
Flycaster
Jan 28, 2004
"DrJohnRuss" wrote in message
(LLutton) wrote

What would you gain by converting an 8 bit image to 16 bit? and then back to 8 bit. Wouldn’t that be like putting perfume on a pig? Sorry, I had to say that. Working on a 16 bit image is better, but the image should start out as 16 bit, I would think, as in a RAW image, or a 16 bit scan.

Certainly starting with a 16 bit per channel image (even if it is really only 10-12 bits from a digital camera) and working entirely in 16 bit mode until the final step when you reduce to 8 bits for printing is the best way to proceed, when you can. But even if you start with an 8 bit image, there are many advantages to promoting it to 16 bits before processing. It prevents a lot of loss of precision in processing operations – that includes filters as well as gamma adjustments and layer modes. Most workflow experts recommend working in 16 bit mode whenever possible, and that was one of the major driving factors behind Photoshop CS providing complete support for 16 bit images and layers.

Doc, I think she’s right. You don’t gain anything by stretching out the bit info; at least, nothing I’ve read or heard indicates that is the case. The reason CS includes so many new high-bit features is that so many people were, and are, bringing in high-bit files to start with, not because they want to work on 8-bit files in high bit.

—–= Posted via Newsfeeds.Com, Uncensored Usenet News =—– http://www.newsfeeds.com – The #1 Newsgroup Service in the World! —–== Over 100,000 Newsgroups – 19 Different Servers! =—–
Warren Sarle
Jan 28, 2004
"Flycaster" wrote in message
"DrJohnRuss" wrote in message

… But even if you start with an 8 bit image, there are many advantages to promoting it to 16 bits before processing. It prevents a lot of loss of precision in processing operations – that includes filters as well as gamma adjustments and layer modes. Most workflow experts recommend working in 16 bit mode whenever possible, and that was one of the major driving factors behind Photoshop CS providing complete support for 16 bit images and layers.

Doc, I think she’s right. You don’t gain anything by stretching out the bit info; at least, nothing I’ve read or heard indicates that is the case.

I suspect that Dr. Russ may know something about numerical analysis, unlike most Photoshop users. Numerical analysts tend to be horrified by Adobe’s programming. Perhaps unduly so. Our customers may complain bitterly about an error in the 15th significant digit. Photoshop routinely produces errors in the 3rd digit and most people don’t notice.

Most Photoshop operations incur numerical error. This error can accumulate if you perform multiple commands or use multiple layers. Some operations, such as a small Curves adjustment, are nothing to worry about. Other operations, such as conversion between color modes with large Curves adjustments, can incur noticeable error, especially in high-quality prints of images with smooth gradients like a clear blue sky. Using 16 bits, the accumulation of error is negligible except in the most extreme cases, regardless of whether the image started as 8 or 16 bits.

Here’s an example where I converted an 8-bit RGB sky photo to LAB, applied a steep curve, and converted back to RGB:
http://home.nc.rr.com/sarle/Test5-RGB8-Curved.tif
http://home.nc.rr.com/sarle/Test8-RGB8-Curved.tif
One of these was done all in 8 bits. The other started with the same 8-bit image but I did the conversion and curve in 16 bits. Can you tell which is which? Maybe not just by looking at them on a monitor, but you can tell from the histograms. If you stacked up several curves layers, the difference might become obvious. I’ll try that when I get CS.

I have a bad habit of applying curves repeatedly using actions. This can produce severe posterization in 8 bits, but the results are not intended to look like realistic photos. So I think Dan Margulis is right for most practical purposes regarding photo processing.
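Warren’s repeated-curves experiment can be simulated. The sketch below is a hypothetical stand-in for such an action, not his actual one: it alternates a gamma 0.8 push with the compensating gamma 1.25 pull ten times, re-quantising after every single application. In 8 bits the number of distinct surviving levels collapses (the posterization); the same edits applied to a 16-bit promotion of the same image leave essentially all 256 levels intact.

```python
import numpy as np

v8 = np.arange(256, dtype=np.float64)   # all 256 levels of an 8-bit image
v16 = v8 * 257                          # the same image promoted to 16 bits

# Ten rounds of an adjustment and its exact inverse (0.8 * 1.25 == 1),
# rounding to the working bit depth after every single application
for _ in range(10):
    for g in (0.8, 1.25):
        v8 = np.round((v8 / 255) ** g * 255)
        v16 = np.round((v16 / 65535) ** g * 65535)

u8 = len(np.unique(v8))
u16 = len(np.unique(np.round(v16 / 257)))
print(u8)    # well under 256 distinct levels: posterized
print(u16)   # essentially all 256 levels survive
```

The merges happen wherever a curve’s local slope is below 1: neighboring 8-bit levels round to the same output and can never be separated again, and each pass merges more. At 16 bits the rounding error per step is a tiny fraction of an 8-bit level, so the final reduction to 8 bits still recovers nearly every original level.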

Man, those Adobe programmers have cushy jobs! Getting away with errors in the 3rd digit!
Flycaster
Jan 28, 2004
"Warren Sarle" wrote in message
"Flycaster" wrote in message
"DrJohnRuss" wrote in message

… But even if you start with an 8 bit image, there are many advantages to promoting it to 16 bits before processing. It prevents a lot of loss of precision in processing operations – that includes filters as well as gamma adjustments and layer modes. Most workflow experts recommend working in 16 bit mode whenever possible, and that was one of the major driving factors behind Photoshop CS providing complete support for 16 bit images and layers.

Doc, I think she’s right. You don’t gain anything by stretching out the bit info; at least, nothing I’ve read or heard indicates that is the case.

I suspect that Dr. Russ may know something about numerical analysis, unlike most Photoshop users. Numerical analysts tend to be horrified by Adobe’s programming. Perhaps unduly so. Our customers may complain bitterly about an error in the 15th significant digit. Photoshop routinely produces errors in the 3rd digit and most people don’t notice.
Most Photoshop operations incur numerical error. This error can accumulate if you perform multiple commands or use multiple layers. Some operations, such as a small Curves adjustment, are nothing to worry about. Other operations, such as conversion between color modes with large Curves adjustments, can incur noticeable error, especially in high-quality prints of images with smooth gradients like a clear blue sky. Using 16 bits, the accumulation of error is negligible except in the most extreme cases, regardless of whether the image started as 8 or 16 bits.
Here’s an example where I converted an 8-bit RGB sky photo to LAB, applied a steep curve, and converted back to RGB:
http://home.nc.rr.com/sarle/Test5-RGB8-Curved.tif
http://home.nc.rr.com/sarle/Test8-RGB8-Curved.tif
One of these was done all in 8 bits. The other started with the same 8-bit image but I did the conversion and curve in 16 bits. Can you tell which is which? Maybe not just by looking at them on a monitor, but you can tell from the histograms. If you stacked up several curves layers, the difference might become obvious. I’ll try that when I get CS.
I have a bad habit of applying curves repeatedly using actions. This can produce severe posterization in 8 bits, but the results are not intended to look like realistic photos. So I think Dan Margulis is right for most practical purposes regarding photo processing.
Man, those Adobe programmers have cushy jobs! Getting away with errors in the 3rd digit!

That’s interesting, but you certainly read far more (or a different tangent) into the Dr.’s post than I did. Anyway, do you agree there is no benefit to a high-bit workflow for an image that starts out in life as an 8-bit image? It seems so, but I’m unsure…

(Also, what the hell are those files? I’m on cable and dropped the download after it had been running for 30 seconds, an *executable* started to run, and my virus checker clamped up tighter than a drum. A little heads-up would’ve been thoughtful.)

Warren Sarle
Jan 28, 2004
"Flycaster" wrote in message
"Warren Sarle" wrote in message

Here’s an example where I converted an 8-bit RGB sky photo to LAB, applied a steep curve, and converted back to RGB:
http://home.nc.rr.com/sarle/Test5-RGB8-Curved.tif
http://home.nc.rr.com/sarle/Test8-RGB8-Curved.tif
One of these was done all in 8 bits. The other started with the same 8-bit image but I did the conversion and curve in 16 bits. Can you tell which is which?

(Also, what the hell are those files? I’m on cable and dropped the download after it had been running for 30 seconds, an *executable* started to run, and my virus checker clamped up tighter than a drum. A little heads-up would’ve been thoughtful.)

They’re TIFF files. You can tell by the .tif extension. TIFF files are large, but it would have been pointless to provide JPEG files, since JPEG compression would have swamped the differences. The executable was probably whatever your system uses to display TIFF files.

As for your virus checker, I’d say get a new one.
Mike Russell
Jan 28, 2004
Warren Sarle wrote:
[re circumvention of numerical error by working in 16 bits]

Here’s an example where I converted an 8-bit RGB sky photo to LAB, applied a steep curve, and converted back to RGB:
http://home.nc.rr.com/sarle/Test5-RGB8-Curved.tif
http://home.nc.rr.com/sarle/Test8-RGB8-Curved.tif
One of these was done all in 8 bits. The other started with the same 8-bit image but I did the conversion and curve in 16 bits. Can you tell which is which? Maybe not just by looking at them on a monitor, but you can tell from the histograms. If you stacked up several curves layers, the difference might become obvious. I’ll try that when I get CS.

I suspect the difference is due to the noise added during the 16->8 bit conversion.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Bart van der Wolf
Jan 28, 2004
"Mike Russell" wrote in message
SNIP
I suspect the difference is due to the noise added during the 16->8 bit conversion.

We do not know Warren’s Color settings, nor do we know how noisy the original was. Test5 looks like the 8-bit version to me, coming from a digicam.

Bart
Flycaster
Jan 28, 2004
"Warren Sarle" wrote in message
As for your virus checker, I’d say get a new one.

Thanks, I’ll send Norton the link and find out what they think happened. In the meantime, was that a yes or a no?

saswss
Jan 28, 2004
In article <4017d1e8$0$320$>,
"Bart van der Wolf" writes:
"Mike Russell" wrote in message
SNIP
I suspect the difference is due to the noise added during the 16->8 bit conversion.

Only a little. Take the difference between the two images and run Auto Levels. You’ll see that the differences have strong systematic trends related to the direction of the original gradient, as well as some noise.

We do not know Warren’s Color settings, nor do we know how noisy the original was. Test5 looks like the 8-bit version to me, coming from a digicam.

Right!

The RGB space was Adobe RGB (1998).


Warren S. Sarle, SAS Institute Inc., SAS Campus Drive, Cary, NC 27513, USA. (919) 677-8000. The opinions expressed here are mine and not necessarily those of SAS Institute.
Flycaster
Jan 29, 2004
"Warren Sarle" wrote in message
They’re TIFF files. You can tell by the .tif extension. TIFF files are large,
but it would have been pointless to provide JPEG files, since JPEG compression would have swamped the differences. The executable was probably whatever your system uses to display TIFF files.
As for your virus checker, I’d say get a new one.

I didn’t mean to jump on you, but the last time I had a system freeze to the point where I had to *unplug* the damn thing, two very ugly days of reconstruction followed. That was a first for me in XP Pro, and not very reassuring.

According to my local guru, it turns out to be a very common problem – the recent install of a new program (ahem) also dumped QuickTime in my box. Apparently, it hijacked a bunch of MIME file extensions (TIFF included) when it installed itself as an IE plug-in. I never noticed it before since it only invokes the call in IE, and I guess this is the first TIFF I’ve opened in IE since. Anyway, trying to open your large TIFFs caused QuickTime, and then IE, to crash, and Norton ultimately did its job by locking the box up. QuickTime gone, end of problem. Live and learn.

East75th
Jan 29, 2004
LLutton wrote:

1. Open a new 256×256 RGB image
2. Define a gray, e.g. <9, 9, 9>
3. Using the paint bucket, fill the image with the gray
4. Display the histogram. You should see a single straight line for each of the 4 charts with zero standard deviation
5. Convert to 16-bits, i.e. Image, Mode, 16-bits
6. Convert to 8-bits, i.e. Image, Mode, 8-bits

What would you gain by converting an 8-bit image to 16-bit and then back to 8-bit?

What I was trying to do was establish the cause of the noise. It seems reasonable that the pixels should be invariant through the chain of operations (8-16-8). Mike Russell (above) provided the reason. Obviously there’s no practical benefit to the round trip; it just let me rule out causes other than PS or my hardware.

Dane

