Downsizing methods

Posted By
Bart van der Wolf
May 3, 2004
Views: 542
Replies: 10
Status: Closed
FYI

I’ve put up a first version of a webpage reviewing different resizing/downsampling methods:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm

Many original images from scanners or digicams are too large for web display. However, downsizing without proper precautions will produce resampling artifacts. The methods described help improve image quality when downsizing has to be applied.
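For readers who want to experiment outside Photoshop, here is a minimal sketch of the core idea in Python with the Pillow library (my illustration, not code from the webpage; the file names and the 5x factor are placeholders): the same reduction done naively, and with a pre-blur plus a windowed-sinc resampler.

from PIL import Image, ImageFilter

src = Image.open("original.tif")      # placeholder input file
factor = 5                            # e.g. reduce to 20% of the original size
size = (src.width // factor, src.height // factor)

# One-step nearest-neighbour resize: fine detail aliases badly.
naive = src.resize(size, Image.NEAREST)

# Pre-blur roughly in proportion to the reduction, then resample with a
# Lanczos (windowed-sinc) filter to suppress most resampling artifacts.
preblurred = src.filter(ImageFilter.GaussianBlur(radius=factor / 2.0))
clean = preblurred.resize(size, Image.LANCZOS)

naive.save("naive.png")
clean.save("preblurred.png")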

Bart

Mike Russell
May 3, 2004
Bart van der Wolf wrote:
FYI

I’ve put up a first version of a webpage reviewing different resizing/downsampling methods:
http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm
Many original images from scanners or digicams are too large for web display. However, downsizing without proper precautions will produce resampling artifacts. The methods described help improve image quality when downsizing has to be applied.

Bart

Thanks, Bart, for an original and very thorough article. Not much has been done on this important topic and you’ve given us a lot to think about.

Mike Russell
www.curvemeister.com
www.geigy.2y.net
JJS
May 3, 2004
"Mike Russell" wrote in message
Thanks, Bart, for an original and very thorough article. Not much has been done on this important topic and you’ve given us a lot to think about.

FWIW, CS has new sampling algorithms for downsampling.
bhilton665
May 3, 2004
From: "jjs"

FWIW, CS has new sampling algorithms for downsampling.

Bart checked both new CS bicubic methods (sharper and smoother) in his tests.

I’ve been using bicubic sharper to downsample since I got CS and it works well for me.
Bart van der Wolf
May 3, 2004
"Mike Russell" wrote in message
SNIP
Thanks, Bart, for an original and very thorough article. Not much has been done on this important topic and you’ve given us a lot to think about.

You’re welcome. I thought it was intriguing to realize that by reducing the size, we increase the artifacts. The results are simple to predict in theory, but it is still revealing to see them with your own eyes.

Bart
Bart van der Wolf
May 3, 2004
"jjs" wrote in message
SNIP
FWIW, CS has new sampling algorithms for downsampling.

Correct, so the CS results shown on the webpage may well look different in older versions. That’s why I included a link to the target, so people can try it themselves.

Bart
Bart van der Wolf
May 3, 2004
"Bill Hilton" wrote in message
SNIP
Bart checked both new CS bicubic methods (sharper and smoother) in his tests.
I’ve been using bicubic sharper to downsample since I got CS and it works well for me.

It will work even better if you pre-blur in proportion to the amount of size reduction. Bicubic sharper will then require less additional sharpening, although separate sharpening provides more control.
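As a rough sketch of "pre-blur in relation to the amount of size reduction" (again Python/Pillow; the 0.3 proportionality constant is my own guess, and Pillow has no exact equivalent of bicubic sharper, so plain bicubic stands in):

from PIL import Image, ImageFilter

def preblur_and_downsample(img, target_width):
    reduction = img.width / target_width        # e.g. 5.0 for a 5x reduction
    radius = max(0.0, 0.3 * (reduction - 1))    # hypothetical proportionality
    blurred = img.filter(ImageFilter.GaussianBlur(radius=radius))
    target = (target_width, round(img.height * target_width / img.width))
    return blurred.resize(target, Image.BICUBIC)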

Bart
bhilton665
May 3, 2004
"Bill Hilton" wrote

I’ve been using bicubic sharper to downsample since I got CS and it works well for me.

From: "Bart van der Wolf"

It will even work better by pre-blurring in relation to the amount of size reduction.

What works for me with film scans is to not do ANY sharpening on the scan, so it’s still a touch soft due to blooming from the scanner, then downsample in 50% steps using ‘bicubic sharper’ as many times as needed to get close to the target size, and then resize one more time to the exact target size. Since my film scans are all pretty much one of three basic sizes, depending on whether I’ve scanned 35 mm, 6×4.5 cm, or 6×7 cm film, I have actions that do the steps, and it’s pretty quick.
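That recipe translates into a short loop; a sketch assuming Python/Pillow, with BICUBIC standing in for Photoshop’s ‘bicubic sharper’, which Pillow does not offer:

from PIL import Image

def downsample_in_halves(img, target_width):
    # Halve repeatedly while a full 50% step still stays at or above the target.
    while img.width // 2 >= target_width:
        img = img.resize((img.width // 2, img.height // 2), Image.BICUBIC)
    # One final resize lands exactly on the target width.
    target_height = round(img.height * target_width / img.width)
    return img.resize((target_width, target_height), Image.BICUBIC)

small = downsample_in_halves(Image.open("scan.tif"), 800)   # placeholder names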

I think not sharpening the initial scan takes care of the pre-blurring 🙂 With ‘bicubic sharper’ I can see artifacting or a kind of crunchy look sometimes when I’ve downsampled sharpened files.

At any rate this method seems to work well for me on actual images, giving better results than I got pre-CS with a variety of techniques. I can see how it won’t work well with a target like you used though. For grins I’ll download your test pattern and see if downsizing in increments (probably smaller than 50% increments since the file is so sharp … maybe 10% steps?) gives better results than resampling in one fell swoop.

Bill
bhilton665
May 4, 2004
For grins I’ll download your test pattern and see if downsizing in increments (probably smaller than 50% increments since the file is so sharp … maybe 10% steps?) gives better results than resampling in one fell swoop.

Hi Bart,

I downsampled using bicubic sharper in CS in 90% steps (15 repetitions in an action to get to 206×206 pixels) and the results are somewhere between ImageMagick’s Triangle and Lanczos results, much better than resampling with one step. This is without a pre-blur.
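The arithmetic checks out: fifteen 90% steps scale the pixel dimensions by 0.9^15 ≈ 0.206, so a source of roughly 1000×1000 pixels (my inference from the numbers, not stated in the thread) lands on about 206×206. In Python:

factor = 0.9 ** 15
print(factor)                # ~0.2059
print(round(1000 * factor))  # 206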

Interesting experiment … thanks for posting. I think when I have images with plenty of fine details I’ll downsample in smaller increments using ‘bicubic sharper’ … perhaps at even finer steps it would be even better.

Bill
Bart van der Wolf
May 4, 2004
"Bill Hilton" wrote in message
SNIP
What works for me with film scans is to not do ANY sharpening on the scan, so it’s still a touch soft due to blooming from the scanner, then downsample in 50% steps using ‘bicubic sharper’ as many times as needed to get close to the target size, and then resize one more time to the exact target size. Since my film scans are all pretty much one of three basic sizes, depending on whether I’ve scanned 35 mm, 6×4.5 cm, or 6×7 cm film, I have actions that do the steps, and it’s pretty quick.

That would work a bit better with unsharpened (Bayer CFA) digicam images and most flatbed scanners. A good film scanner (assuming top-notch film) will have real detail down to the single pixel.

I think not sharpening the initial scan takes care of the pre-blurring 🙂 With ‘bicubic sharper’ I can see artifacting or a kind of crunchy look sometimes when I’ve downsampled sharpened files.

In print that would probably become invisible, but downsizing is more often done for web publishing, so artifacts are not welcome.

At any rate this method seems to work well for me on actual images, giving better results than I got pre-CS with a variety of techniques. I can see how it won’t work well with a target like you used though. For grins I’ll download your test pattern and see if downsizing in increments (probably smaller than 50% increments since the file is so sharp … maybe 10% steps?) gives better results than resampling in one fell swoop.

The benefit of the target is that it represents a worst-case scenario. If the method used behaves well on the target (no artifacts beyond the radius, as a reduction percentage of the diagonal), there’s no need to question less critical subjects. Although pre-blurring may seem counterproductive for increasing quality, remember that any pre-blur introduced will also be reduced in size.

In Photoshop CS, an 8-b/ch Gaussian blur extends to no more than the following number of pixels:
Radius     Pixels
0.0-0.1    0
0.2-0.5    1
0.6-0.8    2
0.9-1.2    3
1.3-1.6    4
1.7-2.3    5
2.4-2.6    6
2.7-2.9    7
3.0-3.3    8
3.4-3.6    9
3.7-3.9    10
Given the shape of a Gaussian curve, I’d say that, e.g., a 5x reduction (to 20%) would tolerate up to a 1.2 radius before losing resolution that can’t be reliably restored by post-sharpening. It is also possible to apply a self-defined averaging filter via Filter|Other|Custom.
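One way to read that table programmatically, as a sketch of my interpretation of the reasoning (the 0.75 threshold is a hypothetical calibration chosen to reproduce the 5x/1.2 example, not Bart’s formula):

# Upper end of each radius range from the Photoshop CS table above,
# paired with the number of pixels the blur extends to.
EXTENT = [(0.1, 0), (0.5, 1), (0.8, 2), (1.2, 3), (1.6, 4), (2.3, 5),
          (2.6, 6), (2.9, 7), (3.3, 8), (3.6, 9), (3.9, 10)]

def max_safe_radius(reduction):
    # Largest tabulated radius whose blur extent, after reduction, stays
    # comfortably below one pixel (0.75 px here, a hypothetical threshold).
    safe = 0.0
    for radius, pixels in EXTENT:
        if pixels <= 0.75 * reduction:
            safe = radius
    return safe

print(max_safe_radius(5))   # 1.2, matching the 5x-reduction example above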

Bart
Bart van der Wolf
May 4, 2004
"Bill Hilton" wrote in message
SNIP
I downsampled using bicubic sharper in CS in 90% steps (15 repetitions in an action to get to 206×206 pixels) and the results are somewhere between ImageMagick’s Triangle and Lanczos results, much better than resampling with one step. This is without a pre-blur.

Pretty good, although that may well work out differently in other software.

Interesting experiment … thanks for posting.

That’s the purpose. Especially if fixed reduction factors are used, one can optimize the action quite well with that target.

I think when I have images with plenty of fine details I’ll downsample in smaller increments using ‘bicubic sharper’ … perhaps at even finer steps it would be even better.

Yes, try both finer and coarser steps, because the outcome depends on the algorithms the software uses. Let the worst-case target be your guide.

Bart
