WordPress Shortcodes in CFML

If you've used WordPress, you may have run across shortcodes.  They're little directives you type into the editor box which then evaluate dynamically when the content is rendered on the public side of your site.  Plugins can register new shortcode handlers, which are implemented as simple function callbacks.  It's a really simple way to expose pseudo-programming constructs to content authors in a safe manner (because you, as the admin/developer, control which shortcodes are available), and without requiring any PHP knowledge or server access.

I needed this sort of functionality in CFML, so after playing with a few different syntaxes and parsers for them, I decided that a direct port of the WordPress shortcodes implementation was the best choice.  The code is pretty small (the grammar is context free and the parser is RegEx-based), and the port (including unit tests) took perhaps an hour and a half.  I had to roll my own REReplaceCallback UDF to match one of the PHP builtins, as well as change the callback API slightly to deal with CFML idioms, but it's a pretty direct port.

So what can you do with shortcodes?  Here's a little demo, both of the front side (the content) and the backside (the handlers and processing).  There is also a link to the source, of course.  And like all my projects, there is a project page where current project information will always be available.
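To make the concept concrete before diving into the demo, here's a rough Python sketch of the general approach (the handler names and regex here are mine for illustration, not the WordPress or CFML code): a registry of named handlers, and a regex-driven pass that swaps each shortcode for its handler's output.

```python
import re

# Hypothetical handler registry: shortcode name -> callback(attrs)
handlers = {}

def register(name, fn):
    handlers[name] = fn

SHORTCODE = re.compile(r'\[(\w+)((?:\s+\w+="[^"]*")*)\s*\]')
ATTR = re.compile(r'(\w+)="([^"]*)"')

def expand(text):
    def dispatch(m):
        name, raw_attrs = m.group(1), m.group(2)
        if name not in handlers:
            return m.group(0)  # unknown shortcodes pass through untouched
        return handlers[name](dict(ATTR.findall(raw_attrs)))
    return SHORTCODE.sub(dispatch, text)

register("year", lambda attrs: "2010")
register("img", lambda attrs: '<img src="%s" />' % attrs["src"])

print(expand('Copyright [year]. [img src="logo.png"]'))
# Copyright 2010. <img src="logo.png" />
```

The key property is the safety valve in `dispatch`: content authors can only invoke shortcodes the developer has registered, and anything unrecognized is left alone.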

More About Me

If you've had any contact with me (meat- or net-space) for the past few months/years, you undoubtedly know that my life has been a bit rough. But it seems that the worst is behind, which is a relief such that I cannot express. I've been officially divorced for two months, living on my own for three, and separated from Heather for 20. And in the past couple weeks I've managed to get my financials more or less sorted to be able to maintain both a quality of life I'm largely content with and not have to work like a maniac.

For what are hopefully obvious reasons, I'd prefer to work just my day job, but doing that would require selling my bike, only treating my eczema to a minimal degree, and probably terminating pretty much all of my personal dev projects (including Pic of the Day). But I think I've managed to augment my salary with sufficient contract development to fund my addictions (to motorcycles, code, and healthy-ish skin), as well as cover a few point expenses (like a replacement laptop, some color for my apartment, clothes without holes/bloodstains, etc.).

Yes, I know that leading with money is both abnormal and socially frowned upon. Whatever. Financial security is really important to me for some reason or another. The good news is that since it led off the post, it's more top-of-mind than my skin, which is a welcome change.

As expected, the termination of Heather and my cohabitation and then marriage has resulted in a huge improvement in my eczema. By and large the lesions are constrained to my hips and thighs (totally clearing my arms, hands, front, back, lower legs, and feet), and are much less severe than before. I still have a few spots where there are actual breaks, but the vast majority of my skin is again closed (though still somewhat itchy and discolored). My feet have almost entirely healed aside from one toenail, and you could inspect my hands with a magnifying glass and find no evidence that three months ago they looked like I had chicken pox and ringworm at the same time (which I did not, I might add).

However, the most pleasant change of all has been with friends. It's only been in the past month or two that I've really been able to appreciate how devastating an effect Heather and my relationship had on my other relationships. Don't get me wrong, I fully understood that I was not even close to being at my best, but really had no idea just how fucked up I was. Fortunately, through some seemingly unbounded grant of grace from everyone around me, I don't think I've lost a friend through this process. Some bounced back quickly, others have been harder, and some are still rocky, but I don't know that any were actually severed. Hopefully Heather and my relationship will have the same rebound, though obviously the wounds run deep and time will need to run its course.

Now that things have settled down to a large degree, I've been able to refocus on various things that I enjoy, but haven't really had the time or energy to deal with. I'm playing music (mostly the piano – well, a cheap keyboard) on a nearly daily basis, have reinvigorated some personal coding projects (like Pic of the Day), have started reading books again (Lolita is the current title), am riding my motorcycle hard and fast without concerns of mental instability wrapping me around a tree, and am just generally enjoying stuff.

Life, of course, is not without reminders of where I was. That continues to be difficult, and I still find myself inconsolably angry with an all-consuming desire to drink myself into a stupor. But that's far less frequent and less intense than it was. The psyche, like the skin, heals remarkably well; scars, however, are a fact of life.

I know a blog post mention is often a non-recognition, but I would like to explicitly mention three.

First and foremost, Kim, who can be in some ways fragile but in so many others is completely unalterable. You, of all people, have never let me down when it mattered, regardless of what I've done. This over the course of more than half my lifetime. One innocuous evening last fall you quite literally saved my life, simply by being you. I can't even imagine what life would be if I didn't know you.

Holly, another dear friend from the first half of my life. You swooped in out of nowhere last summer and gave me a wonderful recess when I so desperately needed one, despite a geographic separation measured in hundreds of miles. Unfortunately the darkest times were yet to come, and I fear those days have left a mark that will be long in vanishing.

Adam, Ali, Erin, and Carrie and Sim for grace on an otherwise grotesque All Saints Day morning, for being in my life, for a mind altering October afternoon, and for seemingly minor things too numerous to count.

Things are returning to "normal", I'm pleased to say, and that's a good thing. Normal will never be the same, but c'est la vie. The kids and I have settled into something of a routine with the little time we have together. Work has largely ceased being an escape from life, becoming instead a place I'd rather not be (aside from the paychecks, of course). Soon it will be spring, and then summer. The world goes 'round, and all of us with it.

Moving Pic of the Day Foiled Again

A while back I made an attempt to move Pic of the Day (NSFW) off of ColdFusion 8 and onto Railo 3.  I can't afford a license of CF9, so my only upgrade path is through a free alternative.  Unless someone has an extra four grand they want to give me….

Last time I was foiled by CFHTTP adding a spurious Content-Type header on GET requests, which breaks secure downloads from S3 (which is where I host all the content).  I reported the bug and it got fixed, but I hadn't had time to revisit the migration process so there it sat.  Until this evening, that is.

I'm glad to say that the issue with GET requests has been completely resolved.  The bleeding edge is also a lot smoother than last time I pulled down a new version, so props to those guys.  Setting up a migration test environment actually proved pretty straightforward, even with all the crazy Apache and OS integration PotD leverages.

As expected, there were errors on the first page load, but nothing some trickery with mappings and rewriting a couple query-of-queries couldn't fix.  After that, everything just worked.  Thumbnail generation, S3 access, emailing, everything.  Except that it wasn't everything.  Turns out that exactly the same problem I had with GET requests before has now manifested itself with DELETE requests.  So I'm again stuck.

The way PotD is implemented, images are spidered and pushed immediately onto S3.  Then they go through the filter pipeline, and many (most?) of them are deleted.  So being able to remove stuff from S3 is a pretty core feature, otherwise I'd have piles and piles of orphaned files up there, and that just costs me money for no reason.  Sadly, this makes Railo a no-go again, and leaves me with CF8 for a while longer.

I've actually got a lot of stuff in the works surrounding my personal sites and projects, but the CF to Railo conversion is one of the larger ones as well as the one with the largest potential impact on server resources (which I'm continually constrained by).  The move from JRun to Tomcat was a huge help, but I could definitely use more, and Railo gives all appearances of being able to give it to me.  I also have some major WordPress infrastructure changes, a whole rebranding of this (my blog), and a few other corollary improvements.

The overarching goal is to simplify my URL space so I don't have as much interleaving between separate applications.  www.barneyb.com's URL space, for example, houses three different blogs, two static sites, and a pile of little CFML apps.  ssl.barneyb.com houses SVN, Trac, PotD, and several other CFML apps.  It's a mess, but that'll be a lot better, regardless of what happens with the CFML engine stuff.

Fake Filenames for Far Future Expires Headers

Everyone knows that one of the best ways to increase page performance is to reduce the number of HTTP requests required for related assets.  There are a pile of ways to approach this (JS/CSS aggregation, image sprites, caching), but the best (and simplest) is client-side caching.  It doesn't help your first-time visitors at all (which is why you still need to look at other optimizations), but for repeat visitors and people who stick around for more than one page, caching is a huge win.

The key to all of this is the Expires header: you want to set an expiration way in the future so the client (the browser) will cache the asset forever.  With Apache it's really simple:

LoadModule expires_module modules/mod_expires.so
ExpiresByType   application/x-javascript    "now plus 10 years"

That tells Apache to set an expiration on all JS files that is 10 years in the future.  In web time, that's pretty much forever.  Really simple, really effective.  You also want to use mod_headers to set Cache-Control and Pragma headers and remove the ETag and Last-Modified headers, as well as ensure we don't use file ETags:

LoadModule headers_module modules/mod_headers.so
Header  set     Cache-Control    "public"
Header  set     Pragma           ""
Header  unset   Last-Modified
Header  unset   ETag
FileETag        None

The problem is that if you ever change one of those JS files, clients who already have the old version will never know about it.  The solution is to never modify files, only create and delete files.  So if you have 'script.js' and you need to make an update, instead of pushing a new version of that file, you'd instead push 'script_2.js' (or whatever).  That way you're guaranteed that every client will download it afresh (with the long Expires header) because no one has ever seen the file before.  Next time you need to make changes, you'd push 'script_3.js'.

This quickly becomes a bit of a problem, because not only do you have to change the filename, you have to change all the references to the filename as well.  So a little JS tweak suddenly becomes a change to all your SCRIPT tags and republishing all your content.  Not too fun.  This is the problem I can help solve, and it'll be using our friend mod_rewrite (of course!).

Check this innocuous little condition/rule:

RewriteCond    %{REQUEST_FILENAME}    !-s
RewriteRule    (.*)_[0-9]+\.(js|css)$    $1.$2

That says: for any request for a JS or CSS file whose name ends with an underscore followed by one or more digits, if the file doesn't exist, remove the underscore and digits.  I.e. when 'script_2.js' gets requested and that file doesn't exist, 'script.js' is served back instead.  Now that's handy, because now we can use an arbitrary version number and they all hit the same file.  This is not a perfect solution, but it is ideal for the vast majority of cases, since you can just modify 'script.js' in place without a care in the world, and then increment the version in your references to force cache refreshes.  Because HTTP caching operates on HTTP URIs, the fact that requests for 'script.js' and 'script_2.js' both hit the same file on disk is irrelevant; they're separate URIs, so they'll be cached separately.
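The condition/rule pair is easy to sanity-check outside Apache.  Here's a little Python sketch (the `resolve` function and filenames are mine, for illustration only) that mimics what the rule does: serve the file as-is if it exists, otherwise strip the version suffix.

```python
import re

def resolve(uri, existing_files):
    """Mimic the Apache rule: RewriteCond %{REQUEST_FILENAME} !-s
    followed by RewriteRule (.*)_[0-9]+\\.(js|css)$ -> $1.$2"""
    if uri in existing_files:
        return uri  # the file exists, so the rule never fires
    return re.sub(r'(.*)_[0-9]+\.(js|css)$', r'\1.\2', uri)

print(resolve('/js/script_2.js', {'/js/script.js'}))
# /js/script.js
```

Note that if you actually did publish a real '/js/script_2.js', the `-s` condition would kick in and Apache would serve that file rather than rewriting, so the scheme degrades gracefully.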

That's not a solution in and of itself, however, because we still have to update all the SCRIPT tags to use a new filename (even though it'll end up hitting the same file on disk).  But now that the URI is divorced from the file itself, we don't have to keep anything in sync.

The last piece is to set up a global variable to use in your script suffixes:

<cfset application.scriptVersion = 2 />
...
<script type="text/javascript" src="/path/to/script_#application.scriptVersion#.js"></script>

Then any time you increment that scriptVersion variable, all your JS will suddenly become uncached and everyone will refresh.  So you can just hack away on script.js until you're happy, bump the variable up, and you're done.  No new files, no changing SCRIPT tags, super simple.

REReplaceCallback UDF

If you've used pretty much any modern language, you know all about callback functions.  CFML is capable of supporting them, but unfortunately the language itself doesn't leverage the feature anywhere.  In particular, a callback for the replace operation is of great value.  Ben Nadel has blogged about such things a couple times, and now I'm doing the same.  First, here's how you use it:

<cfscript>
string = "The catapult bifurcated the bearcat.";
fancyString = REReplaceCallback(string, "(\w*)(cat)(\w*)", doit, "all");
function doit(match) {
  if (match[2] EQ "") {
    return '#match[2]#<b><i>#match[3]#</i></b>#match[4]#';
  } else {
    return '<u>#match[2]#<b><i>#match[3]#</i></b>#match[4]#</u>';
  }
}
</cfscript>

As you'd imagine, the 'doit' function is invoked for each match of the regular expression (in this case looking for a literal "cat" surrounded by any number of word characters).  It then does a check on match[2] (the leading word characters) to see if it's empty and then forks based on that result (either underlining or not).  The 'match' array, as you might surmise, contains the matched expressions.  The first index is the entire expression, and an additional index is added for each subexpression in the regular expression.  In this case, there are three subexpressions, so the 'match' array will have length 3 + 1 = 4.

This particular conditional can be performed without a callback.  Here are a pair of REReplace calls that do it:

<cfscript>
string = "The catapult bifurcated the bearcat.";
fancyString = REReplace(string, "(\W|^)(cat)", "\1<b><i>\2</i></b>", "all");
fancyString = REReplace(fancyString, "(\w+)(cat)(\w*)", "<u>\1<b><i>\2</i></b>\3</u>", "all");
</cfscript>

The first one takes care of words starting with 'cat', the second words with 'cat' inside or at the end.  Note that this only works because the result of the first replace does NOT put word characters next to 'cat' in the replacement string.  If it did that, we'd be screwed, because the two replaces happen sequentially, not in parallel.

In this particular case, neither one of them is very readable.  :)  With a little cleanup and a well-named temp variable, I'd say the callback version has the potential to be more readable, but the pair of REReplaces is pretty much stuck as-is.  As things get more complicated, however, the callback approach becomes dramatically clearer.

The big win, of course, has nothing to do with conditional replaces.  Rather, it's the ability to execute arbitrary CFML code to generate the replace string based on the matched string.  Your callback can do anything you want: go hit the database, shell out to a web service, go grab a dynamically selected bean from ColdSpring and get a value from it, etc.  The sky's the limit.
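For what it's worth, most modern languages ship this behavior natively.  The 'cat' example above translates almost directly to Python's re.sub, which accepts a callable replacement (this is just an illustrative translation, not part of the UDF):

```python
import re

string = "The catapult bifurcated the bearcat."

def doit(m):
    lead, cat, tail = m.group(1), m.group(2), m.group(3)
    core = "%s<b><i>%s</i></b>%s" % (lead, cat, tail)
    # Words that merely start with 'cat' get bold/italic only; words with
    # 'cat' in the middle or at the end are additionally underlined.
    return core if lead == "" else "<u>%s</u>" % core

fancy = re.sub(r"(\w*)(cat)(\w*)", doit, string)
print(fancy)
# The <b><i>cat</i></b>apult <u>bifur<b><i>cat</i></b>ed</u> the <u>bear<b><i>cat</i></b></u>.
```

The shape is identical: a pattern, a function that receives the match (with its subexpressions), and a returned replacement string.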

Here's the REReplaceCallback UDF itself:

<cffunction name="REReplaceCallback" access="private" output="false" returntype="string">
  <cfargument name="string" type="string" required="true" />
  <cfargument name="pattern" type="string" required="true" />
  <cfargument name="callback" type="any" required="true" />
  <cfargument name="scope" type="string" default="one" />
  <cfscript>
  var start = 1;
  var match = "";
  var parts = "";
  var replace = "";
  var i = "";
  var l = "";
  while (true) {
    match = REFind(pattern, string, start, true);
    if (match.pos[1] EQ 0) {
      break;
    }
    parts = [];
    l = arrayLen(match.pos);
    for (i = 1; i LTE l; i++) {
      if (match.pos[i] EQ 0) {
        arrayAppend(parts, "");
      } else {
        arrayAppend(parts, mid(string, match.pos[i], match.len[i]));
      }
    }
    replace = callback(parts);
    start = match.pos[1] + len(replace);
    string = mid(string, 1, match.pos[1] - 1) & replace & removeChars(string, 1, match.pos[1] + match.len[1] - 1);
    if (scope EQ "one") {
      break;
    }
  }
  return string;
  </cfscript>
</cffunction>

Lots of stuff going on in there, but it's basically just doing a REFind with returnsubexpressions=true, ripping apart the string to pass the pieces to the callback function, and then reassembling the string afterwards.  It'd be trivially easy to make a REReplaceNoCaseCallback function, but I haven't done so.  I've implemented the function with CFFUNCTION/CFARGUMENT tags so that I can have an optional fourth parameter on CF8, but the body as CFSCRIPT so that if you want to use the UDF in pure CFSCRIPT on CF9, you only have to rewrap the body (not reimplement it).

This particular implementation differs from what you might expect in that the callback gets substrings instead of position/length tuples (i.e., the way REFind works).  I opted for this approach for two reasons: first, it removes the need for the callback to have access to the raw string; and second, all you'd ever do with the pos/len pairs is rip the string apart to get the characters anyway, so why make every callback do it?

Why did I write this?  Just for fun?  No, not at all.  I needed a way of doing rich inline markup with tags that could be implemented via plugins for a project (you get one guess), and after playing with a couple formats I concluded that porting WordPress's shortcodes was as close to an optimal solution as I was going to get.  The shortcode implementation requires this sort of conditional replace operation, so I built this UDF.  If you do PHP, it's basically equivalent to preg_replace_callback, but with CFML argument ordering.

Yes, I'll be sharing the CFC that implements shortcodes (complete with a port of the WordPress unit tests from PHPUnit to MXUnit), but not right this second.

<cffunction name="REReplaceCallback" output="false" returntype="string">
  <cfargument name="string" type="string" required="true" />
  <cfargument name="pattern" type="string" required="true" />
  <cfargument name="callback" type="any" required="true" />
  <cfargument name="scope" type="string" default="one" />
  <cfset var start = 1 />
  <cfset var match = "" />
  <cfset var parts = "" />
  <cfset var replace = "" />
  <cfset var i = "" />
  <cfloop condition="true">
    <cfset match = REFind(pattern, string, start, true) />
    <cfif match.pos[1] EQ 0>
      <cfbreak />
    </cfif>
    <cfset parts = [] />
    <cfloop from="1" to="#arrayLen(match.pos)#" index="i">
      <cfif match.pos[i] EQ 0>
        <cfset arrayAppend(parts, "") />
      <cfelse>
        <cfset arrayAppend(parts, mid(string, match.pos[i], match.len[i])) />
      </cfif>
    </cfloop>
    <cfset replace = callback(parts) />
    <cfset start = match.pos[1] + len(replace) />
    <cfset string = mid(string, 1, match.pos[1] - 1) & replace & removeChars(string, 1, match.pos[1] + match.len[1] - 1) />
    <cfif scope EQ "one">
      <cfbreak />
    </cfif>
  </cfloop>
  <cfreturn string />
</cffunction>

Sudoku PointingPairStrategy

The next sudoku strategy is called a "pointing pair" which I'm going to start by generalizing into "pointing triple".  The strategy is pretty straightforward: if, for a given number in a given block, all the potential cells are in the same row or column, then that number cannot exist in any other block's cells of the same row or column.

A pointing pair is easier to see than a pointing triple, but necessitates making the definition slightly tighter: if a block contains only two potential cells for a given number and they're in the same row or column, then that number cannot exist in any other block's cells of the same row or column.

Of course, if you crank it down one more step (to a "pointing single"), you have the definition of a known cell (either a given or one already solved for).  But enough prose, on to the code:

boolean play(Board board) {
  def madePlay = false
  board.blocks.each { b ->
    (1..9).each { n ->
      def cells = b.findAll {
        it.hasMark(n)
      }
      if (cells.size() >= 2) {
        if (cells*.col.unique().size() == 1) {
          // all in one col
          cells[0].col.each {
            if (it.block != b) { // different block
              madePlay = it.removeMark(n, this) || madePlay
            }
          }
        }
        if (cells*.row.unique().size() == 1) {
          // all in one row
          cells[0].row.each {
            if (it.block != b) { // different block
              madePlay = it.removeMark(n, this) || madePlay
            }
          }
        }
      }
    }
  }
  madePlay
}

This is the longest strategy so far, but it's pretty straightforward.  For each block, consider each number.  Find all the candidate cells, and if there are two or more, see if they're all in a single column.  If so, loop over the column and remove the number from each cell not in the current block.  Then do the same check for rows.

Using Groovy's getAt (bracket) notation, I could have wrapped the col and row checks into a single loop to reduce some duplication, but I haven't here.  You'll see that technique in some of the later strategies, however.

Finally, you'll notice that the whole board is iterated over and potentially many plays are made before the method returns.  As such, using 'each' iterators wasn't a big deal.  This is probably somewhat wasteful, because a pointing pair can potentially do a lot of elimination (and therefore falling back to the GameRulesStrategy would be useful), but I haven't done that here.

The raw source (including the test cases) is available at PointingPairStrategy.groovy, should you be interested.

Scaling Averages By Count

One of the problems with statistics is that they work really well when you have perfect data (and therefore don't really need to do statistics), but start falling apart when the real world rears its ugly head and gives you data that isn't all smooth.  Consider a very specific case: you have items that people can rate and then you want to pull out the "favorite" items based on those ratings.  As a more concrete example, say you're Netflix and based on a person's movie ratings (from 1-5 stars), you want to identify their favorite actors (piggybacking the assumption that movies they like probably have actors they like).

This is a simple answer to derive: just average the ratings of every movie the actor was in, and whichever actors have the highest average are the favorites.  Here it is expressed in SQL:

select actor.name, avg(rating.stars) as avgRating
from actor
  inner join movie_actor on movie_actor.actorId = actor.id
  inner join movie on movie_actor.movieId = movie.id
  inner join rating on movie.id = rating.movieId
where rating.subscriberId = ? -- the ID of the subscriber whose favorite actors you want
group by actor.name
order by avgRating desc

The problem is that – as an example – Tom Hanks was in both Sleepless in Seattle and Saving Private Ryan.  Clearly those two movies appeal to different audiences, and it seems very reasonable that someone who saw both would like one far more than the other, regardless of whether or not they like Tom Hanks.  The next problem is if they've only seen one of those movies, the ratings are going to paint an unfair picture of Tom Hanks' appeal.  So how can we solve this?

The short answer is that we can't.  In order to solve it, we'd have to synthesize the missing data points, which isn't possible for obvious reasons.  However, we can make a guess based on other datapoints that we do have.  In particular, we know the average rating for all movies for a user, so we can bias "small" actor samples towards that overall average.  This will help mitigate the dramatic effect of outliers in small sample sizes when there aren't enough other datapoints to mitigate them.

In other words, instead of this:

    \bar{r}_{actor} = \mathrm{avg}(rating_{movie_{actor}})

we can do something like this:

    n = \mathrm{count}(rating_{movie_{actor}})

    \bar{r}'_{actor} = \bar{r}_{actor} - \frac{\bar{r}_{actor} - \bar{r}_{overall}}{1.15^n}

This simply takes the normal average from above, and "scoots" it towards the overall average based on how many samples we have.  The denominator is a constant picked by me (more on that later) raised to a power equal to the number of samples.  This way, as the number of samples goes up, the magnitude of the correction falls rapidly.  Here's a chart illustrating this (the x axis is a log scale):

With only one sample, the per-actor average will be scooted 87% of the way towards the overall average.  With four samples the correction will be only 57%, and by the time you get 32 samples there will be only a 1% shift.  Note that those percentages are of the distance to the overall average, not any absolute value change.  So if a one-sample actor happens to be only 0.5 stars away from the overall average, the net correction will be 0.435.  However, if a different one-sample actor is 1.5 stars away from the overall average, the net correction will be 1.305.
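The correction itself is a one-liner.  Here's a Python sketch (the `corrected_avg` name is mine; the formula and the 1.15 constant are straight from above):

```python
def corrected_avg(avg, overall, n, factor=1.15):
    """Scoot a small-sample average toward the overall average.

    The shift is the gap between the two averages divided by factor**n,
    so it decays exponentially as the sample count n grows.
    """
    return avg - (avg - overall) / factor ** n

# Fraction of the gap removed at various sample counts (~0.87, ~0.57, ~0.01):
shift = {n: 1 / 1.15 ** n for n in (1, 4, 32)}
```

With lots of samples the correction vanishes and you get the plain average back; with one or two samples the result is dragged most of the way to the overall mean.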

Of course, I'm not Netflix, so my data was from PotD, but the concept is identical.  The "1.15" factor was derived from testing on the PotD dataset, and demonstrated an appropriate falloff as the sample size increased.  Here's a sample of the data, showing both uncorrected and corrected average ratings, along with pre- and post-correction rankings:

Model   Samples   Average   Corr. Average   Rank   Corr. Rank
#566    22        4.1818    4.1310          46     1
#375    12        4.1667    3.9640          47     2
#404    13        4.0000    3.8509          81     3
#1044   7         4.2857    3.8334          44     4
#564    5         4.4000    3.7450          42     5
#33     32        3.7500    3.7424          176    6
#954    4         4.5000    3.6895          40     7
#733    4         4.5000    3.6895          39     8
#330    7         4.0000    3.6551          74     9
#293    5         4.2000    3.6444          45     10
In particular, model #33 sees a huge jump upward because of the number of samples.  You can't see it here, but the top 37 models using the simple average are all models with a single sample (a 5-star rating), which is obviously not a real indicator.  Their corrected average is 3.3391, so not far off the leaderboard, but appreciably lower than those who have consistently received high ratings.

For different size sets (both overall, and expected number of ratings per actor/model) the factor will need to be adjusted.  It must remain strictly greater than one, and is theoretically unbounded on the other end but there is obviously a practical/reasonable limit.

Is this a good correction?  Hard to say.  It seems to work reasonably well with my PotD dataset (both as a whole, and segmented various ways), and it makes reasonable logical sense too.  The point really is that if you don't care about correctness, you can do some interesting fudging of your data to help it be useful in ways that it couldn't otherwise be.

A Drastic Change

So today I decided to upgrade to WordPress 2.9.2 (I'd been running 2.7 since forever), and unfortunately it broke K2, which is the theme I've been using since I switched to WordPress years ago.  K2 was a solid theme, but I thought it had been getting rather unstable, and it was hard to get a good release.  It's also really heavy, bloated with JavaScript, and just wasn't what I wanted anymore.  However, with a busted site, it wasn't the time to go theme shopping.  A little digging around the WP core showed that they'd deprecated (and then changed the API of) the attribute_escape function, and K2 depended on the old behaviour (automatic spreading across an array).  A couple quick patches and things were back on their feet.

Being the idiot I am, I figured I may as well upgrade K2 (since it would have those same fixes I made, along with other related ones I didn't catch), and that went horribly.  Their 1.0.3 release pretty much failed, and is even more bloated than before.  So now, with an again-broken site, I went theme shopping.

Fortunately, I'd just received a couple recommendations on a K2 replacement, and Theme Hybrid with the Life Collage child theme appeared to be relatively close to what I wanted.  So after tossing that in, I again had a non-broken site.  After another hour or so of CSS hackery atop the Life Collage CSS (but just the Hybrid HTML), I again have a site I'm relatively happy with.  Unfortunately WordPress doesn't let child themes extend other child themes (only "root" themes), so I couldn't get all the Life Collage HTML goodies (at least without copying), but no great loss there, I don't think.

As an added bonus, the theme supports dropdown menus for the top nav (based on subpages), which is something I've wanted for my project list for a while, but never got around to building with K2.  So one less project I have to do, and that's always a good thing.

The only remaining major issues are that there doesn't seem to be a no-comments template for pages (which is unfortunate, I think), and WP apparently did away with the recent posts sidebar widget (no idea why). [ed. They didn't get rid of it, they just changed it to be a 'postbypost' archive widget.] I'm not sure I'm going to stick with Life Collage's base styling, but I'll probably stick with Hybrid as a root theme.  Both are a little id-heavier than I could wish for, but they've got pretty solid markup which makes styling pretty straightforward.

Just for the sake of completeness, the other couple suggested theme replacements were WP Framework by Ptah Dunbar and Thematic by Ian Stewart.  Of the three, Hybrid looked the best out of the box, I thought, though they were all very similar.  It also happened to have prepackaged child themes, which was a win for me with the state of my site at the time.  But even without that, I think I'd still have picked Hybrid.

Wow Am I UGLY!!

Yeah, so apparently WordPress 2.9 totally broke K2.  My apologies for the horrific appearance of the site, though I'm delighted to say the admin area still looks awesome!  Or something.  I'll get it fixed here shortly, I promise….

UPDATE: apparently WordPress not only deprecated `attribute_escape`, they also changed its functionality (despite its widespread use in the app) to no longer correctly process arrays.  I don't know.  In any case, replacing a couple instances of it within K2 with an `array_map` application of `esc_attr` fixed it.  The actual problem was that the BODY classes weren't being correctly generated, thereby throwing off all the CSS selectors.

UPDATE 2: I've been wanting to get away from K2 for a while.  It was awesome, but now it's kind of broken.  And upgrading to 1.0.3 left me with all kinds of weirdness.  So I kicked it to the curb, and am going to be starting afresh.  Should be good.

UPDATE 3: So I've obviously reskinned.  For an hour of hacking a previously-unknown theme's CSS I'm pretty happy, but definitely still pretty rough around the edges.  But it's good enough, so now it's time for dinner.

Public URLs via Amazon S3 CFC

I made another minor update to my Amazon S3 CFC this evening, this time to support PUTting public objects.  To this point, the CFC left the default ACL on new objects PUT on S3, meaning that you needed an authorized URL (via a query string signature) to retrieve them.  That was the use case I had when I built it, and again every time I've used it on other projects.  However, that's not every use case.

With this latest update, there is now an optional fifth parameter to 'putFileOnS3' named 'isPublic'.  It defaults to false (to retain backwards compatibility).  If you set it to true, the 'public-read' ACL will be placed on the object you're PUTting, making it immediately publicly available for retrieval.  There is a corresponding change to the private 'getRequestSignature' method for accepting headers on requests to sign per Amazon's CanonicalizedAmzHeaders format.
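For the curious, the CanonicalizedAmzHeaders piece is part of S3's (legacy) request-signing scheme: amz headers get lowercased, sorted, and folded into the string-to-sign before HMAC-SHA1 signing.  Here's a rough Python sketch of that scheme (the key, bucket, and file names are made up; this is an illustration of the signing format, not the CFC's code):

```python
import base64
import hashlib
import hmac

def sign_request(secret_key, verb, resource, date,
                 amz_headers=None, content_type="", content_md5=""):
    """Build an S3 signature-v2 string-to-sign and sign it."""
    amz_headers = amz_headers or {}
    # CanonicalizedAmzHeaders: lowercase names, sorted, one "name:value\n" each
    canonical_amz = "".join(
        "%s:%s\n" % (name.lower(), value)
        for name, value in sorted(amz_headers.items(),
                                  key=lambda kv: kv[0].lower())
    )
    string_to_sign = "%s\n%s\n%s\n%s\n%s%s" % (
        verb, content_md5, content_type, date, canonical_amz, resource)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request(
    "fake-secret", "PUT", "/my-bucket/photo.jpg",
    "Sat, 27 Mar 2010 19:36:42 +0000",
    amz_headers={"x-amz-acl": "public-read"},
)
```

The point of including the `x-amz-acl: public-read` header in the signed string is exactly what the update above describes: the ACL travels with the PUT, so the object is publicly readable the moment it lands.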

As always, the CFC itself is available (amazon.cfc.txt) and the project page is the place to get latest info/releases.