The .PSB format seemed like a "prayers answered" kind of thing when I learned about it. Now that I’ve gotten my design department their dual-2GHz G5s running 10.3.3 – each with 2GB RAM – & the entire CS suite of apps, some of us are discovering we can’t save .PSB files bigger than 2GB to our servers.
We’re not saving out of PSCS directly, either – that’s how we first discovered the problem. Then we tried saving the files to our hard drives – that works fine – but copying them to our servers doesn’t. The file transfer gets to 1.99GB & then stops with "disk error – not enough room," even with multiple gigabytes of free space!
We can copy them from Mac to Mac across the network using File Sharing, but not to the server.
Is it PSCS’s .PSB, or is it our Linux-based 0.8TB RAID server that’s at fault somewhere? Incidentally, we tried NFS sharing initially, but file access & directory indexing were so painfully slow we reverted to AppleShare/AppleTalk.
Still, it seems like there ought not to be a file size limit when all we’re trying to do is get a file to "stick"!
No; you should be able to copy the files by dragging. If you can’t, it’s a network and/or server problem. The Adobe gurus have said repeatedly that it’s impossible to foresee all the imponderables of all types of network configuration. If the network won’t allow you to save a closed file and returns a "disk error – not enough room" message, then someone has deliberately or unwittingly set a 2GB file size limit in the network.
The server at my day work (not in the graphics or photo industry) has an arbitrary 512KB file size limit.
We can copy the BIG stuff from Mac to Mac across our network, even to Macs still running OS 9. As long as they can be "seen" on the network & mounted on the file host’s Mac Desktop, they’ll accept file copies via the Finder.
It’s in sending these biggies to the server that the plot thickens: files larger than 1.99GB just don’t make it. Not saved out of Photoshop to the server, mind you, like Document 322391 mentions as problematic (I’ve NEVER advocated working that way – I’ve always understood the limitations of server-based workflows when folks try to open & save files resident on the server), but just copied from a local hard drive to the server.
The 1.99 (or 1.97 – depends who you talk to) GB limit is a known problem with some servers. Have you tried transferring the files using ftp? That may be a workable workaround for you.
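For what it’s worth, that 1.99 figure lines up almost exactly with the largest byte offset a signed 32-bit integer can represent – my guess (nothing in this thread confirms it) at where the ceiling comes from:

```shell
# The classic 2GB wall: the biggest file a signed 32-bit
# byte offset can address is 2^31 - 1 bytes, which a file
# browser truncates to "1.99 GB".
echo $(( (1 << 31) - 1 ))    # 2147483647 bytes
```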
The Linux option was a beancounter-inspired choice at the time; we’d been using an SGI-based SCSI RAID system running Xinet FullPress, but the associated costs of ownership (OS upgrades, SCSI RAID upgrades & expansion, FullPress upgrades, tape back-up, management, etc.) were deemed "unacceptable."
We’re now considering our options.
"Have you tried transferring the files using ftp?"
Rene, that’s worth looking into, though I know the users will resist the option – if it works. As an end-run around the current limitation, though, it could be worthwhile for the IT folks.
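If ftp turns out to be a non-starter for the users, another blunt end-run – just a sketch, and not something anyone in this thread has tried – is to split the big .PSB into pieces under the 2GB ceiling, copy those over AppleShare as usual, and reassemble them on the far side:

```shell
# Cut bigfile.psb into 1GB pieces (bigfile.psb.aa, .ab, ...),
# each comfortably under the ~2GB AFP ceiling:
split -b 1024m bigfile.psb bigfile.psb.

# ...copy the pieces to the server, then reassemble there:
cat bigfile.psb.?? > bigfile.psb

# Sanity check: the reassembled file's checksum should match
# the original's.
cksum bigfile.psb
```

The pieces are dumb byte-for-byte slices, so `cat` in shell-glob order restores the file exactly; the only real cost is the temporary doubling of disk space.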
We went from 120GB (in the previous configuration) to just under 750GB about a year ago, & at present there’s only 14GB of free space remaining. It’s become a question of whose responsibility it is to "manage" the files needed for job production and archiving.
Mike, that had been our experience as well, but selling the costs associated with keeping the system up-to-date proved impossible. The folks upstairs decided that, since the lease was up on the hardware part, it was less costly to just replace the thing with a "new player" than to add the software & hardware upgrades enabling it to remain a viable file server.
Not sure, though, that we wouldn’t have hit the same ceiling anyway: we’d been running it as an AppleTalk file server, & that seems to be the failing of the AppleTalk (Netatalk?) emulation on the Linux server too.
An Xserve would be my next choice, but Apple is so deep into development now that who knows what will break down the road.
To my astonishment, I’ve recently set up a G4/450 as an OS X server (non-Server Panther software) on a 1000Base-T network, and it’s really freakin’ fast for file service.
That’s particularly interesting to me, as our previously used G4s (all gigabit-Ethernet dual 450s) are coming off line. It just might be a reasonable plan to repurpose one as a file server, albeit with significantly enhanced storage capacity.
I have an associate in another geographical location who opted for an early Xserve, and who’s described encountering speed bumps similar to those we’ve seen using Linux.