"Chuck Snyder" wrote in message
[…] If I try to use some of those ‘slow’ filters after viewing lots of pix, the time it takes for them to complete their tasks is very long indeed – and sometimes results in a freeze-up (should also mention that I have Win 98SE, which has its own set of issues).
Indeed Win98SE does. Not as bad as WinME though, so there’s a little Pollyanna opportunity for you there. 🙂
Do you suppose some of these filters are highly iterative in nature and get ‘compute-bound’ on a resource-limited machine?
The biggest question is "is there disk activity when running the filters?" A second question is "what happens if you run the same filter on the same image twice?"
You wrote "I also find that if I’ve had a lot of pictures open, that even if I close them, the video memory isn’t necessarily freed up." What are you using to determine whether memory has been freed or not? Also, do you really mean "video memory"? In PC jargon, "video memory" refers to the RAM that’s installed on the video card. It’s not used by programs like Elements, except inasmuch as Elements (like any other program) indirectly causes data to be written to the video memory when it asks the operating system to display something on the monitor.
Some thoughts on what you may be seeing (without a very detailed description of the problem, all I can do is guess, and even with a detailed description, without being there, I still can’t make any really precise suggestions)…
Generally speaking, the number of images you have open should not affect the performance of any given filter. The exception to this is that if you use a filter on an image after you have spent some time working on *different* images, you may find that some, most, or even all of the data for that image has been "swapped out" to the hard disk while you were working on those other images.
The consequence of this would be that, as the filter works its way through the image, all of that data needs to be read back into RAM so that the CPU can do the filter’s processing on it. To make matters worse, the data for the other images (the ones you were working on before using the filter on the active image) may have to be written back out to the disk to make room for the image being processed. Finally, generally speaking the operating system won’t just read the whole image back in at once; it’ll do it piecemeal as the filter works its way through the image, which is about the slowest way to read data from the disk.
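Just to put some very rough numbers on that (these figures are purely illustrative, not anything specific to Elements): a 2000 x 3000 pixel photo at 3 bytes per pixel works out to about 18 MB of raw pixel data, and once the editor keeps working copies around for undo history and any layers, the real footprint can easily be a few times that. Open a handful of images like that on a machine with, say, 128 MB of RAM, and the operating system has little choice but to push some of that data out to the swap file on disk.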
So, going back to those first two questions: if there is disk activity while the filter is running, then you are almost certainly seeing a disk swapping issue. If you can run the filter two times in a row and get different results (poor performance the first time, better performance the second), that would reinforce the theory that you’re low on RAM, especially if there’s a noticeable decrease in disk activity the second time. The reason is that the first time through the image, you bring as much of the image into RAM as possible (possibly all of it), so the second time through, the only memory usage that image is competing with is itself (assuming it doesn’t all fit at once), rather than the other images you were working on (since they all got swapped out to disk the first time you ran the filter).
Now, this could apply even if you’ve closed the other images. It really just depends on what order you do things. Even when you close the other images, that will not cause the operating system to read the data for the remaining image back into RAM. The only thing that will do that is actually working on the image again. Basically, if something’s been written to disk, the operating system won’t read it back in until the very last minute; that is, not until it actually needs that data again.
So working on those other images could push the first image’s data out to disk, where it will sit until you start editing that image again. Even closing the images will not change that state of affairs.
Okay, so that’s my "don’t have a lot of information, but here’s a thought" reply. A few other comments before I go:
* All of the above assumes a bug-free program. There’s a programming error called a "memory leak" that happens when a program is finished with a particular chunk of data, but never actually tells the operating system that it’s done with it. There are a variety of ways to make this error from a programming point of view, but broadly they fall into two classifications: in some cases the program still knows about the data and is hanging on to it for no good reason, and in other cases the program has actually forgotten about the data, even though it never told the OS it was done with it. My experience has been that the quality of Adobe software is reasonably high, but even they are not immune to accidents. So, there could be a memory leak in Elements that is causing the problem you’re seeing. Depending on the nature of the bug, you may or may not see disk swapping along with it. The second kind is actually a little better (if you’re going to have such a bug, that is), because since the program has forgotten about the memory, it will eventually get written to the disk, and at that point will not cause any further performance problems (assuming you have enough disk space, of course 🙂 ).
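If you’re curious what that error actually looks like from the programming side, here’s a tiny made-up C sketch of the two classifications (purely illustrative, nothing to do with Adobe’s actual code):

    #include <stdlib.h>

    static char *cache[100];     /* pointers the program keeps around */
    static int cache_count = 0;

    void leak_but_still_reachable(void)
    {
        /* First kind: the program still "knows" about the memory (the
           pointer is saved in cache[]), it just never gets around to
           freeing it, so each call stashes another megabyte for no
           good reason. */
        if (cache_count < 100)
            cache[cache_count++] = malloc(1024 * 1024);
    }

    void leak_and_forget(void)
    {
        /* Second kind: the program overwrites its only copy of the
           pointer, so it has genuinely forgotten the memory exists.
           The OS can't reclaim it until the program exits; in the
           meantime it eventually gets swapped out to disk and sits
           there. */
        char *p = malloc(1024 * 1024);
        p = malloc(1024 * 1024);   /* the first allocation is now lost    */
        free(p);                   /* only the second one ever gets freed */
    }

    int main(void)
    {
        leak_but_still_reachable();
        leak_and_forget();
        return 0;
    }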
* You ask about whether "some of these filters are highly iterative in nature and get ‘compute-bound’ on a resource-limited machine". I’m not really sure what you’re asking there, since I don’t know your definition of "resource-limited". However, there are two main bottlenecks with respect to these filters: CPU/memory, and disk. If the entire image can fit into RAM *and* is already resident in RAM before running the filter (that is, there haven’t been other competing pieces of data in use), the disk should not be a bottleneck. If you’ve solved the RAM issue (which is mostly what I was writing about in this message), there is still the question of how fast the CPU can get through the image. This depends on two different things, either of which could be the bottleneck: memory access speed (which determines how quickly the CPU can read a piece of data from the system RAM), and CPU speed (which determines how long the CPU will take to process each pixel once it’s gotten the data from RAM).
Pretty much *all* of the filters I see in Elements (and I assume Photoshop) are indeed "highly iterative in nature" (by definition since, after all, they iterate through the entire image one pixel at a time 🙂 ). As such, they are definitely "compute-bound" (that is, dependent on the processor’s ability to process data), except when they are "disk-bound" (which is what happens when the image is not entirely in RAM when the filtering starts).
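Just to make the "highly iterative" part concrete, the heart of a simple filter (a made-up brightness tweak here, not anything from Adobe) boils down to a loop like this, which has to visit every byte of the image:

    #include <stdio.h>
    #include <stdlib.h>

    /* Made-up "brighten" filter over an 8-bit RGB image buffer.  The
       point is simply that the loop touches width * height * 3 bytes:
       double the width and height and the loop body runs four times
       as often. */
    static void brighten(unsigned char *pixels, int width, int height, int amount)
    {
        long total = (long)width * height * 3;
        long i;
        for (i = 0; i < total; i++) {
            int v = pixels[i] + amount;
            pixels[i] = (unsigned char)(v > 255 ? 255 : v);
        }
    }

    int main(void)
    {
        int width = 2000, height = 3000;   /* about 6 million pixels */
        unsigned char *pixels = calloc((size_t)width * height * 3, 1);
        if (pixels == NULL)
            return 1;
        brighten(pixels, width, height, 20);   /* walks through ~18 MB */
        printf("done\n");
        free(pixels);
        return 0;
    }

If all of that data is already sitting in RAM, the loop is limited only by the CPU and the memory bus; if a chunk of it is out on the swap file, the reads stall while the disk catches up, and that’s when the filter is disk-bound rather than compute-bound.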
Bottom line, things that will affect how long it takes to filter an image:
* The size of the image. This is a major factor. Each time the image height and width double, the amount of data to process quadruples. Data size grows with the square of the image’s linear dimensions, and data size affects the time it takes to filter an image linearly (thank goodness it’s only linear…there are some computational problems where increases in data cause the time to increase with the square or cube of the data size 🙂 ). There’s a quick bit of arithmetic on this right after the list. Of course, normally you don’t have much control over this. You selected the size of the image because that’s the number of pixels you need, and you have to live with that.
* Amount of RAM. This is a major factor, and in fact has the potential to completely swamp any other factors if your amount of RAM is small enough. A faster hard drive will help a little, but the real solution is to have enough RAM installed in the machine that the data doesn’t have to visit the hard drive much, if at all. Keeping all the data in RAM rather than swapping it to disk can improve processing time by several orders of magnitude in some cases.
* Disk fragmentation. This is related to the disk swapping and disk speed. If you have enough RAM and aren’t swapping, disk fragmentation doesn’t matter. But if you ARE swapping, having a fragmented disk can make an already bad situation MUCH worse. Just for example: I do a bit of video editing, and transcoding (changing from one format to another) a video file can take along the lines of 3 minutes for an unfragmented file versus 30 minutes for a fragmented one. In the same way that having enough RAM makes the difference between being CPU bound and being disk bound, the state of fragmentation on your disk makes the difference between waiting on data to be transferred from disk to RAM, versus waiting on the disk head to move back and forth around the disk (the former is MUCH faster than the latter; even though moving data to and from the disk is still a lot slower than moving data between RAM and the CPU, it’s WAY faster than the physical motion of the read/write head on a disk drive).
* CPU speed. Once you’ve solved the disk/data size issues, the CPU speed is probably the next big factor. However, it takes a large change in CPU speed to see a difference in processing speed. A 2GHz CPU isn’t going to get through your image in half the time a 1GHz CPU will. Best case, you would see a 30-40% decrease in time spent, and depending on a variety of other issues, you may not even see that much. Even so, a faster CPU is always helpful.
* Memory access speed. Most computers are designed so that the RAM access is appropriate for the speed of the CPU. However, it certainly is possible to find a computer built with components that aren’t quite matched to each other, where even though the CPU is fast, the components that handle moving data between the CPU and the RAM cannot keep up, leaving the CPU starved for something to do on computationally intensive tasks like image editing. "Front side bus" speed is what you care about here, along with the type of memory interface (right now "double data rate" combined with "dual channel" will provide the fastest memory access for a given front-side bus speed). Unless I had a computer that was under-performing by a ridiculous amount (e.g. a 2GHz machine that’s actually slower than a comparable 1GHz machine), I wouldn’t spend much time worrying about memory access speed, since chances are the computer is put together right.
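To put a number on that first point about image size: a 1000 x 1500 photo is 1.5 million pixels; double both dimensions to 2000 x 3000 and you’re at 6 million pixels, so four times the data and (all else being equal) roughly four times as long for a filter to grind through it.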
Sorry, I guess even the bottom line got kind of long. Hopefully somewhere in there is an answer to your question. 🙂
Pete