To BlogCFC or not to BlogCFC…

Currently my blog (along with my wife's, my sister's, and my sister-in-law's), runs on a custom amalgamation of technologies: MovableType in a 'service' role, custom JSP frontend, custom CFML admin area.

As happens to nearly every small personal project that evolves over several years, the piecemeal nature of the system is starting to be a burden.  So I'm looking into rebuilding, and I'm considering Ray's BlogCFC as a base, which seems to generally be the system of choice for CF blogs.

However, I wanted to solicit some community feedback first, particularly regarding ease of extension.  Beyond core blogging, the other three blogs that would use it must support blog-bound photo galleries, customized skinnability, and a few other features, none of which are tied to the core premise of blogging, and which therefore fall outside BlogCFC's area of responsibility.  The trick, of course, is that the users must remain unaware (as they are now) that the blogging system is separate from the gallery system: they just sign into the admin and do their thing.

So, comment away: I'm listening….

Neuromancer [Re]Addition

A while back I needed to do a standard form POST via Neuromancer, so I'd added a doFormPostRequest method to the JSRemote object.  I just needed it again, and for whatever reason, it hadn't made its way into the core distribution.  So I merged my modded sources in (I love version control ; )), and thought I'd share.

As before, the updates are in the Subversion repo, or you may use the patch below.  Note that this time it's for js/io/Gateway.js, not RemoteObject.js.

Index: Gateway.js
===================================================================
--- Gateway.js (revision 8)
+++ Gateway.js (revision 9)
@@ -161,6 +161,38 @@
};

/**
+ * Method: JSRemote.doFormPostRequest
+ * This method implements a multi-field form submission via a POST,
+ * using the 'fields' object as a set of name:value pairs to pass as
+ * the form fields. It simply delegates to doPostRequest for the
+ * actual processing; the only functionality is serializing the fields.
+ *
+ * Note that this method's parameter ordering does NOT correspond to
+ * doPostRequest's.
+ *
+ * Parameters:
+ * url - the url to POST to
+ * fields - the form fields to POST
+ * handler - the callback function to send results to
+ */
+JSRemote.prototype.doFormPostRequest = function _doFormPostRequest(url, fields, handler) {
+ var body = "";
+ var headers = new Object();
+ var boundary = "neuro" + Math.random();
+
+ for (var i in fields) {
+ body += "--" + boundary + "\nContent-Disposition: form-data;name=\"" + i + "\"\n";
+ body += "\n";
+ body += fields[i] + "\n";
+ body += "\n";
+ }
+ body += "--" + boundary + "--";
+
+ headers["Content-Type"] = "multipart/form-data; boundary=" + boundary;
+ this.doPostRequest(url, handler, body, headers);
+}
+
+/**
* Method: JSRemote.doPostRequest
* Does a simple post request, passing the bodyinfo as the body of the
 * request - meaning the only way to get the bodyinfo out is to do
@@ -171,7 +203,7 @@
 * func_handler - the callback function to send the results to
 * bodyinfo - what to send in the body of the POST
*/
-JSRemote.prototype.doPostRequest = function _doPostRequest(url, func_handler, bodyinfo)
+JSRemote.prototype.doPostRequest = function _doPostRequest(url, func_handler, bodyinfo, extraHeaders)
{
log.info("doPostRequest to " + url);
log.info("using pipe: " + this.connectionid);
@@ -211,6 +243,11 @@
conn.setRequestHeader("XLibrary", "Neuromancer 1.5beta");
if(bodyinfo.indexOf("
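For reference, the serialization the new method performs can be sketched standalone (serializeFormFields is a hypothetical helper name, not part of Neuromancer; the real method hands the result straight to doPostRequest):

```javascript
// Standalone sketch of doFormPostRequest's body serialization: each field
// becomes a multipart/form-data part framed by "--" + boundary markers.
function serializeFormFields(fields, boundary) {
  var body = "";
  for (var name in fields) {
    body += "--" + boundary + "\nContent-Disposition: form-data;name=\"" + name + "\"\n";
    body += "\n";                  // blank line separates part headers from the value
    body += fields[name] + "\n";
    body += "\n";
  }
  return body + "--" + boundary + "--";  // closing boundary ends the payload
}

var body = serializeFormFields({ q: "neuromancer" }, "neuro12345");
// body starts with "--neuro12345" and ends with "--neuro12345--"
```

(Strictly speaking, the multipart spec calls for CRLF line endings; the sketch mirrors the patch's use of plain \n.)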

Subversion Rules!

I've been using Subversion for a while now.  Not sure exactly how long, but about a year, I'd guess.  Before that it was CVS for a number of years.  I have to say, first of all, if you're not using version control, start.  It's worth a bajillion times more than the few hours it'll take to set it up.  Second, the Subversion guys had their heads on straight.

I just tried externals definitions for the first time this evening, and talk about sweet.  Basically, they let you store (via Subversion properties, i.e. in-repository metadata) references to external projects' Subversion repositories, and then work with your multi-repository working directory transparently, in a fully supported fashion.

Perhaps an example would be good.  I'm working on a CFUG presentation on JS remoting, and I'm using Neuromancer and Script.aculo.us as part of it.  Since both have Subversion repositories, and I have commit access to the Neuromancer repository (and may want to commit bug fixes while I'm working), externals are perfect.  I define a simple svn:externals property on my root directory, do an svn update, and BAM, my working directory is updated, including fresh checkouts of the Neuromancer and Script.aculo.us code.  Make some mods, run svn status, and again, all the mods on all three projects are nicely laid out hierarchically, ignorant of the fact that their sources reside in three totally separate SVN repositories.
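The property value itself is just one external per line, mapping a local directory to a repository URL (this is the "dir URL" syntax; the URLs below are placeholders, not the projects' real repository locations):

```
neuromancer      http://svn.example.org/neuromancer/trunk
scriptaculous    http://svn.example.org/scriptaculous/trunk
```

Set it with something like svn propset svn:externals -F externals.txt . on the root directory, and the next svn update pulls both projects into place.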

Also, if I were to check out a fresh working copy on some other machine, guess what happens?  I also get the two external projects for free, because the external references are part of the SVN metadata, so they're included, and they're versioned.  All for free.

Now this might not seem like a particularly useful feature, but perhaps you have dependencies within a single repository: your app needs to rely on a specific version of a module that is tracked in that same SVN repository.  Create a tag for the submodule, and create an externals definition for that tag.  Then, until someone updates the svn:externals property, everyone will always get that tag of the submodule, regardless of where the submodule's development takes it.  Better yet, when you update svn:externals, as soon as you run svn update on your working directory, you'll magically get the new version of the tag.

Magical…. 

Another Neuromancer Bug Fixed

Chris Phillips found another bug in Neuromancer today.  Computed numbers, returned as members of a struct, were always deserialized as null.  Literal numbers, however, were handled correctly.  While debugging the problem, I realized that methods with returntype="numeric" simply refused to run as well, for much the same reason.  Both glitches are now fixed.

As before, the zip on SourceForge has not been updated, though the changes are available in the Subversion repository.  I've included a patch below for js/io/RemoteObject.js that you can use to get the fix as well.  The patch should work with or without last night's update, but I'd recommend applying them both in order.  If you want the updated test cases, you'll have to hit the SVN repository.

Index: RemoteObject.js
===================================================================
--- RemoteObject.js (revision 6)
+++ RemoteObject.js (revision 7)
@@ -24,6 +24,8 @@
var DATATYPE_STRING2 = "xsd:string";
var DATATYPE_ARRAY = "soapenc:Array";
var DATATYPE_BOOLEAN = "soapenc:boolean";
+var DATATYPE_NUMBER = "soapenc:double";
+var DATATYPE_NUMBER2 = "xsd:double";

/**
* Variable: REMOTE_OBJECT_VERSION
@@ -74,11 +76,16 @@
//if(dItem.item(z).getAttribute("href")== null
// || typeof dItem.item(z).getAttribute("href") == "undefined"
// || dItem.item(z).getAttribute("href") == "")
- if(dItem.item(z).getAttribute("xsi:type") == DATATYPE_STRING)
+ var xsiType = dItem.item(z).getAttribute("xsi:type")
+ if(xsiType == DATATYPE_STRING || xsiType == DATATYPE_STRING2)
{
value = dItem.item(z).firstChild.nodeValue;
}
- else if(dItem.item(z).getAttribute("xsi:type") == DATATYPE_MAP)
+ else if (xsiType == DATATYPE_NUMBER || xsiType == DATATYPE_NUMBER2)
+ {
+ value = parseFloat(dItem.item(z).firstChild.nodeValue);
+ }
+ else if(xsiType == DATATYPE_MAP)
{
value = new Map();
//value = "!ref! " + dItem.item(z).getAttribute("href");
@@ -705,6 +712,10 @@
eval(__dfh__variable + " = resvalnodes.item(0).firstChild.nodeValue");
}
}
+ else if (returntype == DATATYPE_NUMBER || returntype == DATATYPE_NUMBER2)
+ {
+ eval(__dfh__variable + " = parseFloat(resvalnodes.item(0).firstChild.nodeValue)");
+ }
//}
//this is a structure (a coldfusion struct)
//else if(complextypeid != null && complextypeid.length > 1)
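In essence, the fix adds a numeric branch to the xsi:type dispatch.  A minimal standalone sketch of that dispatch (deserializeValue is a hypothetical function, simplified to take the type and text directly; the real code reads them off the DOM node):

```javascript
// Simplified sketch of the type dispatch the patch adds.
var DATATYPE_STRING  = "xsd:string";
var DATATYPE_NUMBER  = "soapenc:double";
var DATATYPE_NUMBER2 = "xsd:double";

function deserializeValue(xsiType, text) {
  if (xsiType === DATATYPE_NUMBER || xsiType === DATATYPE_NUMBER2) {
    return parseFloat(text);  // numbers now come back as numbers, not null
  }
  return text;  // strings (and anything unrecognized) pass through unchanged
}

deserializeValue("soapenc:double", "3.5");  // → 3.5 (a number)
deserializeValue("xsd:string", "3.5");      // → "3.5" (still a string)
```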

Neuromancer Bug Fixed

Chris Phillips found a bug in Neuromancer 0.6.0beta today.  If you used the RemoteObjectLoader class (recommended) instead of the raw RemoteObjectFactory (way nasty), it was impossible to create multiple remote objects on a single page, because the RemoteObjectLoader class was not thread safe.  The first remote object created was always returned by every RemoteObjectLoader, regardless of which remote object it was supposed to load.

Unfortunately, Rob and I are still getting the project all set up on the SourceForge system, so a new build with the fix included hasn't been released.  As such, the bug fix is only available via Subversion or via the patch file below.  Subversion instructions can be found on this page, and as they suggest, you will want to append "/trunk" to the URL listed in the command.

Since that's not very user friendly, I've included a patch for js/io/RemoteObject.js that will update an existing 0.6.0beta version with the bug fix.  Note that you want to patch just that one file, not the whole directory tree.

Index: RemoteObject.js
===================================================================
--- RemoteObject.js (revision 3)
+++ RemoteObject.js (revision 4)
@@ -136,8 +136,10 @@
if(typeof async == "undefined")
async = false;
- RemoteObjectLoader.RemoteObjectFactory.setAsync(async);
- RemoteObjectLoader.RemoteObjectFactory.createObject(RemoteObjectLoader.HTTPConnectFactory.getInstance(), url);
+ this.remoteObjectFactory = new RemoteObjectFactory();
+
+ this.remoteObjectFactory.setAsync(async);
+ this.remoteObjectFactory.createObject(RemoteObjectLoader.HTTPConnectFactory.getInstance(), url);
var self = this;
@@ -143,7 +145,7 @@
checkLoaded = function()
{
- var remObj = RemoteObjectLoader.RemoteObjectFactory.getObject();
+ var remObj = self.remoteObjectFactory.getObject();
var fieldCount = 0;
for(var i in remObj)
{
@@ -160,7 +162,6 @@
this.loadInterval = setInterval(checkLoaded, 100);
}
RemoteObjectLoader.HTTPConnectFactory = new HTTPConnectFactory();
-RemoteObjectLoader.RemoteObjectFactory = new RemoteObjectFactory();
/////////////////////////////////////////////////////////////////////////////////////

Any decent IDE (cough, Eclipse, cough) will have a patch function, or you can use the command line patch utility.
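The essence of the bug is a factory stored as a class-level (static) property, shared by every loader.  A minimal sketch of the before/after (names simplified, not the actual Neuromancer classes):

```javascript
// Before the fix, the equivalent of Loader.sharedFactory = new Factory()
// meant every loader saw whichever object was created first.
function Factory() { this.obj = null; }
Factory.prototype.createObject = function (url) { this.obj = { url: url }; };
Factory.prototype.getObject = function () { return this.obj; };

function Loader(url) {
  this.factory = new Factory();   // the fix: a per-instance factory
  this.factory.createObject(url);
}

var a = new Loader("/cfc/blog.cfc");
var b = new Loader("/cfc/gallery.cfc");
// a.factory.getObject().url and b.factory.getObject().url now differ,
// instead of both returning the first object created.
```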

Where have I been?

It's been a LONG time since I've posted, and I feel bad.  But hopefully I'll get back on track.

In the past two months, I've primarily been focused on designing a new version of our main application.  For various reasons, we're moving away from CF to a "pure" Java application.  While I'm not particularly happy to be losing such a fantastic tool, the benefits of the platform we've selected more than outweigh what CF can offer.  I'll still be a CF user for a while, at least another couple of years, but it'll stop being my primary tool within a month or two.

In my personal life, the kids are growing, and they're quite happy the weather's making a turn for the better.  Lindsay's quite happy to run around in the back yard, and ride her bike out front.  Emery definitely likes the outdoors as well, and he shows it, even without any sort of verbal communication.

Hopefully I'll get back to blogging regularly, but no promises.  ; ) 

Aggregated by MXNA

I finally started reading MXNA a week or two ago, and thought I should submit this blog to it.  So I did, and I just got my confirmation email back saying I'm in.  Yay me!

Atomic Commits to Version Control

As many know, I'm a strong (tee-hee) advocate of version control.  For everything.  Code, of course, configuration files, my GNUCash datafile, etc.  Basically, if it's important, and it's not several gigs of photos, music, or video, it's in version control.  Seems like overkill for a lot of things, but at the very least, it allows me to back up my repository and get all my important data in one fell swoop.

One thing that I try very hard to be diligent about is the 'atomic commit'.  That is to say, when you commit, you're committing exactly one change.  It might be across a bunch of files, but it's one change.  The reason is that if you later realize it was a bad choice for whatever reason, you need only do a reverse merge (merge -rN:N-1) to undo it.  If you don't have an atomic commit, and you only want to undo part of what you committed, then you're stuck with doing that merge, but then undoing part of the merge, which gets really dicey if you've got some mods in a file that you want, and some mods that you don't.  Ick.

I mention this because I wasn't diligent about this on two commits about three weeks ago.  And of course, part of one of those commits needed to be rolled back yesterday, and it took the better part of three hours, instead of the expected 10-15 minutes.  Version control is far too powerful a tool to squander its capabilities by being lazy when it comes time to check in.

Bonus points to anyone who got the joke I "laughed" about in the first sentence. ; )

Persistence CFC Generators

Last week a couple friends (Sim and Chris) and I were talking about persistence mechanisms and the benefits of each.  Obviously there's the inline SQL route, which is the most performant, and the most cumbersome to maintain.  At the other end are abstract persistence frameworks such as Arf! or Reactor, which are designed for ease of use and maintenance, but not necessarily for speed or complex operations.  In the middle is the concept of a static generator: basically a tool that will write all your persistence code for you, based on your DB schema, but that performs no management once the code is written.  You have to write the generator to begin with, of course, but you can also customize the generated code to match your application.

The question was: which is the right solution?  Between the three of us, we decided that all three were.  The main app I work on uses a generator because performance is important.  Sim, on the other hand, did some prototyping with Arf! and decided he'd just keep that backend for the production app, since it was performant enough.

It's an interesting puzzle, because the maintainability of a managed persistence layer is very attractive, but balancing that against performance can be difficult.  But there's a way to make the static generator method a lot friendlier that I wanted to discuss, because it's a little "weird", and can easily be overlooked (as I did for over a year).

Basically, what you do is use your generators as if they're a third party code source, and generate your CFCs into a vendor branch in your version control system.  Then you can easily maintain custom modifications to the generated CFCs (as almost invariably needs to happen), without it getting in the way of regenerating at a later time.  A good example is adding a field to an entity.  Add the field to the schema, regenerate to the vendor branch, and then merge the mods into your real codebase.

With a setup like that, the gap between a generator and a persistence framework is narrowed considerably, because you can effectively regenerate as often as you like (though it still requires a manual step).  But you needn't pay the run time cost of a framework, which can be important for heavily loaded applications.

Just wanted to throw that all out there as an idea to consider, and as another reason to make sure you're using some kind of version control for all your apps.

Firefox 1.5 Setup

I like Firefox.  A lot.  Enough to want to run three copies concurrently, each with different options and different purposes.  One for work, one for my personal stuff, and one for some monitoring apps.  Main differences include two different GMail sessions, some different prefs for auto find, and completely different bookmarks, of course.

With 1.0.7, the setup was simple.  Just create three different profiles, and set it to not load the default profile in profiles.ini.  However, with 1.5 that no longer works.  On Linux, at least, you need to pass in a MOZ_NO_REMOTE=1 environment variable.  From reading around, this seems to be the case for Windows and OS X as well, but I haven't tested.  You also need to explicitly pass the -ProfileManager argument on the command line.  Once that's done, you'll get a prompt for which profile to launch any time you run your shortcut.  New windows launched from other apps (by clicking links) will load in the most recently active browser window, regardless of profile.

This took the better part of an afternoon to get all figured out.  Definitely a step backwards from 1.0.x, but whatever.  Note that the profiles themselves upgraded without a hitch.

One problem this leaves you with is no way of telling which window belongs to which profile.  You can usually tell by the contents, but a more definitive identifier would be nice, and I found it with the Firesomething plugin.  The plugin is designed to change the "Mozilla Firefox" tag in the titlebar to something humorous, like "Mozilla Thunderpanda," or whatever.  It has randomization, too, so it'll pick a different name each time.  That's cool and all, but if you set the option list to just a single entry, you can use it to tag a profile's windows with an in-titlebar label.  Since each profile has its own prefs, you can make them all unique.  Very nice.  The only problem is it doesn't run on 1.5 out of the box.  You have to unzip the .xpi file, edit install.rdf to change the maxVersion to 1.6+, rezip, and then install.  But it's a small price to pay.  Note this is obviously an unsupported hack, but it seems to work just dandy.

Rounding out my extensions, just for reference, are Adblock, Tab Mix Plus, and the fantastic Web Developer.  And if I were on Windows, I'd have IETab in there too.  Since most of you who read this are developers, use your version control system for your Adblock rule list.  Makes it a lot easier to maintain, especially across multiple profiles and/or machines.