I've updated my AmazonS3 CFC to include local caching of files. The new source is available here: amazons3.cfc.txt, or visit the project page. The only public API change from the first version is the addition of an optional third parameter to the init method for specifying the local directory to use as a cache. If you're doing repetitive read operations on S3-stored assets, using the local cache can speed things up significantly, though it is not without drawbacks.
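To illustrate, initializing with the cache enabled might look something like this (the credential arguments reflect how I'd expect init to already be called; only the third argument is new, and the variable names and path are just examples):

<!--- hypothetical usage: the first two arguments are the AWS credentials init
      already takes; the third (new, optional) argument is the local directory
      to use as the cache --->
<cfset s3 = createObject("component", "amazons3").init(
    application.awsAccessKeyId,
    application.awsSecretAccessKey,
    expandPath("/s3cache/")
  ) />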
In particular, the CFC assumes it is the only interface to the S3-stored assets it manages. If you manipulate those assets through any other mechanism (including other CF applications), you'll run into issues. The cache folder itself is the canonical source of cache state, so if the cache gets out of sync, emptying the folder will always revert the CFC back to S3's state.
If you cluster multiple CF instances together, you can still use the local cache, but you must use a single cache for all instances. That is, the cache must reside on a disk shared by all instances, rather than each instance having its own separate cache. This reduces the performance benefit slightly (since you must use a non-local disk), but it will still be faster than hitting S3.
The CFC exposes a deleteCacheFor() method, which accepts a bucket and objectKey pair and can be used to manage the cache outside of actual S3 operations. If you have multiple CF instances that cannot share a single local cache, or for which the network overhead of a shared cache is still undesirable, you can use this method to synchronize the instances' caches via JMS or the like. Obviously that's far outside the scope of the CFC itself, but the hook is there to support it. Note that you must delete the cache entry when overwriting an asset on S3; the local cache will not pick up the change, and will continue to return the old version until it's cleared.
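For example, if something other than this CFC instance has just overwritten an object, a call along these lines (the bucket and key are obviously just example values) purges the stale entry so the next read goes back to S3:

<!--- purge the cached copy so the next read fetches the new version from S3 --->
<cfset s3.deleteCacheFor("my-bucket", "images/logo.png") />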
I am starting to use S3 with my CF site. Is there anything I need to know before I try using the s3.cfc? Can I get directory listings with it? That's my main problem right now in trying to integrate with S3.
Eric,
No, there isn't a facility for listing files on S3. My assumption was that you already know what's on S3. I know there's another S3 integration package on riaforge.org, but I've never used it. There are also various command-line utilities that I've used with great success from PHP (ick!) apps.
That being said, list functionality would be trivial to add.
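As a rough sketch of what I mean (the method name, the createSignature helper, and variables.accessKeyId are placeholders here, not necessarily what the CFC uses internally), a bucket listing is just a signed GET on the bucket URL and a walk of the returned XML:

<cffunction name="listKeys" access="public" returntype="array" output="false">
  <cfargument name="bucketName" type="string" required="true" />

  <cfset var timestamp = getHttpTimeString(dateConvert("local2utc", now())) />
  <!--- string to sign for a GET with no content: verb, blank MD5, blank type, date, resource --->
  <cfset var signature = createSignature("GET#chr(10)##chr(10)##chr(10)##timestamp##chr(10)#/#arguments.bucketName#/") />
  <cfset var keys = arrayNew(1) />
  <cfset var nodes = "" />
  <cfset var i = 0 />

  <cfhttp method="GET" url="http://s3.amazonaws.com/#arguments.bucketName#/">
    <cfhttpparam type="header" name="Date" value="#timestamp#" />
    <cfhttpparam type="header" name="Authorization" value="AWS #variables.accessKeyId#:#signature#" />
  </cfhttp>

  <!--- pull every Key element out of the ListBucketResult XML (ignores the
        1000-key pagination/marker handling for brevity) --->
  <cfset nodes = xmlSearch(xmlParse(cfhttp.fileContent), "//*[local-name() = 'Key']") />
  <cfloop from="1" to="#arrayLen(nodes)#" index="i">
    <cfset arrayAppend(keys, nodes[i].xmlText) />
  </cfloop>

  <cfreturn keys />
</cffunction>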
Barney, I'm wondering if you know how to set the ACL in script on the COPY function. I'm using S3.CFC and I did try adding:
<cfargument name="acl" type="string" required="false" default="public-read" />
to the end of the list of arguments up front and then adding the parameter at the end:
<cfset copyObject(arguments.oldBucketName, arguments.oldFileKey, arguments.newBucketName, arguments.newFileKey, arguments.acl) />
but it's not working. I can PUT the object with those arguments fine, but if I rename the object (which uses COPY), there's no longer default open read access to the file. Do you have any idea how to go about this? I'm losing hair over it, I think!
Best,
PJ
pj,
I've not actually tried this, but here's a patch from another user who needed a public ACL on their resources; they use it successfully in all their stuff. I've only ever used protected resources, and I've never done COPYing, so I don't really know how to help with your specific issue, but hopefully this will get you started down the road to fixing it.
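The gist of it, as I understand it: S3's COPY doesn't carry the source object's ACL across, so the x-amz-acl header has to go on the copy request itself and be included in the signed headers. Roughly like this sketch (the createSignature helper and variables.accessKeyId are placeholders for however the CFC builds its other signed requests, not its actual internals):

<cffunction name="copyObject" access="public" returntype="void" output="false">
  <cfargument name="oldBucketName" type="string" required="true" />
  <cfargument name="oldFileKey" type="string" required="true" />
  <cfargument name="newBucketName" type="string" required="true" />
  <cfargument name="newFileKey" type="string" required="true" />
  <cfargument name="acl" type="string" required="false" default="public-read" />

  <cfset var timestamp = getHttpTimeString(dateConvert("local2utc", now())) />
  <!--- the x-amz-* headers go into the string to sign, sorted, one per line,
        followed by the *destination* resource --->
  <cfset var signature = createSignature(
      "PUT#chr(10)##chr(10)##chr(10)##timestamp##chr(10)#"
      & "x-amz-acl:#arguments.acl##chr(10)#"
      & "x-amz-copy-source:/#arguments.oldBucketName#/#arguments.oldFileKey##chr(10)#"
      & "/#arguments.newBucketName#/#arguments.newFileKey#"
    ) />

  <cfhttp method="PUT" url="http://s3.amazonaws.com/#arguments.newBucketName#/#arguments.newFileKey#">
    <cfhttpparam type="header" name="Date" value="#timestamp#" />
    <cfhttpparam type="header" name="x-amz-acl" value="#arguments.acl#" />
    <cfhttpparam type="header" name="x-amz-copy-source" value="/#arguments.oldBucketName#/#arguments.oldFileKey#" />
    <cfhttpparam type="header" name="Authorization" value="AWS #variables.accessKeyId#:#signature#" />
  </cfhttp>
</cffunction>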