Well, if the file started off as a 16 bit file, yes, it's destructive.
Once you go from 16 bit to 8 bit, the extra levels of tone are gone for good. Whether that makes a visible difference depends on what you do with the image afterwards.
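If you want to see what "gone" actually means in numbers, here's a rough numpy sketch of the level loss (just an illustration of the quantization, not what Photoshop does internally):

    import numpy as np

    # 65,536 possible tone levels in a 16 bit channel
    tones_16 = np.arange(0, 65536, dtype=np.uint16)

    # simple scale down to 8 bit (256 levels) and back up again
    tones_8 = (tones_16 // 257).astype(np.uint8)
    back_16 = tones_8.astype(np.uint16) * 257

    print(np.unique(tones_8).size)   # 256 - the in-between levels are gone
    print(np.unique(back_16).size)   # still 256, even though the file is 16 bit again

Converting back up just spreads the same 256 levels across the 16 bit range; it doesn't invent new tonal information.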
AFAIK the only advantage of going from 8 bit to 16 bit is if you plan on doing heavy adjustment work on Alpha channels for masks, etc., where you are essentially creating new elements in 16 bit instead of 8 bit.
IMHO, the times when you'll find any real difference between images worked on in 16 bit and 8 bit are few (if in doubt whether it might matter, I'll stay in 16 bit longer if I can), and if you work on large files with lots of layers the performance hit from 16 bit is huge, even on a powerful system. I'm sure others will express differing opinions.
I generally do my heavy adjusting either in the Raw file or in 16 bit, and then convert to 8 bit. I will sometimes convert back to 16 bit if I discover some major mask work is needed (usually by making huge adjustments to a copy of a channel) that I didn't know about before moving to 8 bit. Unlike the advantage of staying in 16 bit throughout, which I haven't been able to demonstrate, this is something I have personally seen help.
To clarify, the images start in 8 bit; then I convert to 16 bit to fix skies and add shadows.
I think Ric may have answered my question; the RGB to CMYK analogy is the one I was looking for.
Any other opinions welcomed, though. Thanks.
If »Use Dither (8-bit/channel images)« is selected in the color settings, going from 16-bit to 8-bit does add noise.
So that might be considered destructive, even if it is what helps avoid visible banding. Occasionally I have found it beneficial to work in 16-bit where gradients are concerned (blurring them somewhat if they had started out as 8-bit).
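A quick sketch of why the dither trades banding for noise (again only an illustration, not Adobe's actual dither algorithm):

    import numpy as np

    # a shallow 16-bit-style ramp: 0..1000 out of 65535
    gradient_16 = np.linspace(0, 1000, 4096)

    # straight quantization to 8 bit: only a handful of output levels
    plain_8 = np.round(gradient_16 / 257)

    # add a little random noise before quantizing (a crude dither)
    dither_8 = np.round(gradient_16 / 257 + np.random.uniform(-0.5, 0.5, gradient_16.size))

    print(np.unique(plain_8).size)   # 5 levels -> long flat bands across the ramp
    # plain_8 has long runs of identical values (the visible steps);
    # dither_8 averages out to the same ramp, but the steps dissolve into noise.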
If feasible data-wise I would recommend keeping the layered file in 16-bit; but that might be more of a personal preference than due to quality concerns.