8bit -> 16bit, resampled, or am I fooling myself?

Posted By Carl_Merritt · Mar 7, 2004
Views: 858 · Replies: 10 · Status: Closed
I found that just changing an image to 16-bit mode in PS didn’t actually do much as far as image information goes.

If I take an 8bit image to 16bit mode, then do a steep curves adjustment, then look at the histogram, I get lots of gaps in the info, just as if I had done that adjustment on an 8-bit image.

However, I found if I changed the color space to Lab, then go 16bit mode, then change the image back to RGB – if I then do that same curves adjustment I don’t get the gaps in the histogram.
The histogram reflects more of what I get doing that curves adjustment to a native 16-bit image.

So, is this a true workaround to resample an 8bit image to an actual 16bit image, or am I just fooling myself?
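A rough numpy sketch of the straight-promotion half of this experiment (the `steep_curve` function is a made-up stand-in for a Curves move, not Photoshop's actual math):

```python
import numpy as np

rng = np.random.default_rng(0)
chan8 = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)  # one 8-bit channel

# The simple promotion: code v becomes v * 257, so the 65,536-code
# range still holds at most 256 distinct values.
chan16 = chan8.astype(np.uint16) * 257

def steep_curve(x01):
    """A steep contrast move on [0, 1] values, standing in for Curves."""
    return np.clip((x01 - 0.25) * 2.0, 0.0, 1.0)

out = np.round(steep_curve(chan16 / 65535.0) * 65535).astype(np.uint16)
print("distinct output codes:", len(np.unique(out)))  # <= 256, hence the gaps
```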


Don_McCahill · Mar 7, 2004
This is just an assumption, but here goes.

Going from 8 to 16 doubles the available data, but does not add any. So your histogram shows the gaps.

When you change the color mode, after going to 16 bit, the color mode makes use of the additional space, and uses it to better represent what was in the old color mode. Thus your histogram is not showing the gaps you saw the first time.

That said, why do you want to convert 8 to 16? I can see (at least theoretically) the use for 16 bit mode if you are acquiring images with that level of detail, either from a camera or scanner. But I don’t see the value of converting 8 bits of data to 16.
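Don's guess is easy to test in code. A sketch, assuming scikit-image's rgb2lab/lab2rgb as a stand-in for Photoshop's Lab conversion (the exact numbers will differ): the non-linear round-trip lands most pixels on new 16-bit codes, so the 256 source levels spread across the wider space.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb  # stand-in for Photoshop's Lab mode

rng = np.random.default_rng(0)
img8 = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Straight promotion: still at most 256 distinct codes per channel.
straight = img8.astype(np.uint16) * 257

# Round-trip through Lab at float precision, then quantize to 16-bit:
# the non-linear conversion creates intermediate values.
roundtrip = np.clip(lab2rgb(rgb2lab(img8 / 255.0)), 0.0, 1.0)
via_lab = np.round(roundtrip * 65535).astype(np.uint16)

print("straight promotion:", len(np.unique(straight[..., 0])))  # <= 256
print("via Lab round-trip:", len(np.unique(via_lab[..., 0])))   # many more
```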
Peter_Figen · Mar 7, 2004
If you have the Dither option checked when you make your color mode changes, that will add enough random noise to fill in the Histogram, and since you’re already in 16 bpc at that point, you won’t see any gaps from additional adjustments. Doesn’t mean the image is any better though, just that it has some noise added to it.
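Conceptually, dither on promotion works something like this sketch (an illustration of the idea, not Adobe's algorithm; the noise width is a chosen assumption of about half a gap either way):

```python
import numpy as np

rng = np.random.default_rng(1)
grad8 = np.tile(np.arange(256, dtype=np.float64), (256, 1))  # an 8-bit gradient
scaled = grad8 * 257.0                                       # promote to 16-bit range

plain = np.round(scaled).astype(np.uint16)                   # 256 codes, gaps of 257
noise = rng.uniform(-128.5, 128.5, scaled.shape)             # ~half a gap either way
dithered = np.clip(np.round(scaled + noise), 0, 65535).astype(np.uint16)

print("plain codes:   ", len(np.unique(plain)))     # 256
print("dithered codes:", len(np.unique(dithered)))  # the gaps fill in, as grain
```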
Carl_Merritt · Mar 7, 2004
I did another test with just a gradient and found banding in the 8bit, but not when I went through my cook-up to 16bit.

However, it is very noisy, as Peter Figen suggested it might be.

Don McCahill: "When you change the color mode, after going to 16 bit, the color mode makes use of the additional space, and uses it to better represent what was in the old color mode. Thus your histogram is not showing the gaps you saw the first time."

Does this mean PS is perhaps doing a linear interpolation between the original 8bit values?

This is what I’m trying to do:
I am working with DV footage, but do not have the capability to capture it in any way other than 8bit. However, we will do fairly strong curves adjustments, and effects/compositing, and I was hoping to find a way to help maintain the image quality – if that makes sense.
Since I could not find a way to do my steps in AE, right now I plan to batch process the captured frames through PS – unless it’s a waste of time.

I suppose I will do a test, but I wanted to find out if I had stumbled onto something, to boost my confidence in this procedure.

Thanks.
Klaas_Visser · Mar 7, 2004
I don’t think there is much point in converting an 8-bit image to 16-bit, as it has no idea what the extra 8 bits should consist of – it probably just pads them with zeros, so there is no more colour detail added.

16-bit is only useful if you’ve captured an image at higher than 8 bits, through scanning or whatever.

cheers
Klaas

EDIT – originally posted via offline newsreader so I didn’t see the other replies, which have more information about how it does the conversion, so I’ve learnt something 🙂
Toby_Thain · Mar 7, 2004
Carl_Merritt: "Does this mean PS is perhaps doing a linear interpolation between the original 8bit values?"

Interpolation is a way of filling the gaps when increasing *spatial* resolution. AFAIK there is no foolproof way to "guess" at the missing low order bits on a per-datum (pixel) basis, when going to a deeper channel format. (Although methinks a hack involving FFT might do something useful.)
John_Slate · Mar 7, 2004
Take a 1-bit image that contains two levels (black and white) and convert it to grayscale: it still has only 2 levels, but in a mode that supports 256 levels.

If you take that grayscale and then convert it to 16bit/channel, it still only has 2 levels but in a mode that supports 65,536 levels.

If you just bring it back to 8bit without any editing you will still just have 2 levels.

The same is true of converting full color 8bit files to 16bit. You float the 256 levels into the 65,536, but no pixels are interpolated to intermediate values.

If you do some tonal work in 16bit, you will change some of the values to intermediate values, which when converting back to 8bit will induce dithering to simulate said intermediate value. But essentially there is little to be gained by doing this.

If there is banding in an 8bit image this will not help.
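John's walk-through in a few lines of numpy (a sketch of the principle): promoting bit depth re-labels the existing levels but never invents new ones.

```python
import numpy as np

bilevel = np.array([[0, 1], [1, 0]], dtype=np.uint8)  # a "1-bit" image
gray8 = bilevel * 255                                 # 2 levels in an 8-bit mode
gray16 = gray8.astype(np.uint16) * 257                # still 2 levels in 16-bit
print(np.unique(gray16))                              # [    0 65535]
```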
Don_McCahill · Mar 7, 2004
I don’t think there is any interpolation going on when you change from 8 to 16; that is why there are gaps in your histogram. The interpolation happens when you change modes, which fills your gaps.

Again, this is guessing. I hope Chris Cox drops into this thread … he would know.
Carl_Merritt · Mar 7, 2004
Okay, I believe you guys.
But then I have a naive question. If it does not help to make adjustments to an image in a higher bit mode, why does so much CC software tout that it does its operations in higher bit modes?

For example, Color Finesse for AE advertisement:
"Floating Point Color Space

Unlike other color correction tools, Color Finesse isn’t limited to 8-bits per color channel. In fact, it’s not limited to 10- or even 16-bits per color. Color Finesse uses a 32-bit floating-point representation for each color channel, for a total of 96 bits per pixel.

By using floating point, Color Finesse is able to encode colors with great resolution, while at the same time maintaining tremendous latitude. And rounding error–the source of much of the "banding" you’ve seen–is virtually eliminated. Even extreme adjustments don’t cause clipping of out-of-range highlights and shadows. "

That says to me that basically their software resamples your 8-bit images, then handles all operations internally at floating-point depth.

Wouldn’t that be similar to my idea of taking an image to 16-bit, then doing CC, then going back to 8-bit?
And if it doesn’t make a difference, why would Color Finesse make a big deal about it?

I’m confused…
Thanks to all of you for your input and time.
Bill_Lamp · Mar 7, 2004
I’m not familiar with Color Finesse but I see it this way.

If you have an 8-bit color file, it can do its thing with it. If you have a 16-bit color file, ditto.

If you have a 96-bit file (and where the heck do you get one of those???) and it will indeed work with them, it will again do whatever it does.

Some plug-ins only work with 8-bit files. Being able to work with 16 is a selling point even if your output device only handles 8. Being able to say your product works with more may be of more use in advertising than in production.

That said (and I may be all wrong in the above), I work in 16-bit color as long as possible and only convert to 8 when I have to, or to "lock down" a file for the Epson 2200. I really don’t see how sending a 16-bit file to an 8-bit printer can do any good (at best), and I can see how it could possibly hurt (spool speed and thus print speed slowing down, or the file being converted by something somewhere doing its own thing, whatever that is, to the picture).

Bill
LenHewitt · Mar 7, 2004
Carl,

Converting to 16-bit can minimize rounding errors.
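A sketch of Len's point, with made-up opposing adjustments (`darken` and `brighten` are illustrative, not any tool's actual curves): the same strong pull-down and push-up, applied once with 8-bit intermediates and once at higher precision. Quantizing between steps crushes levels together; deferring the quantization to the end preserves them.

```python
import numpy as np

src = np.arange(256, dtype=np.uint8)            # every 8-bit level once

def darken(x01):   return x01 * 0.25                     # strong pull-down on [0, 1]
def brighten(x01): return np.clip(x01 * 4.0, 0.0, 1.0)   # opposing push-up

# 8-bit working space: quantize after every step.
mid  = np.round(darken(src / 255.0) * 255).astype(np.uint8)
out8 = np.round(brighten(mid / 255.0) * 255).astype(np.uint8)

# High-precision working space: quantize only once, at the end.
outf = np.round(brighten(darken(src / 255.0)) * 255).astype(np.uint8)

print("levels with 8-bit intermediates:", len(np.unique(out8)))  # ~65: banding
print("levels with float intermediates:", len(np.unique(outf)))  # 256
```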
