Automated Deployments #3: The Extend Macro in Binsor

In the previous post, I dealt with the standard .NET config file.  It would be lovely if all we needed to deal with was appsettings and connection strings.  Sadly, the world is more complex than that.  Quite a few systems (pretty much all of the XML-based ones) give you the option of including their config in the app.config or in a separate file.  I can't say I see that it makes a blind bit of difference.  Each of them has its own XML schema, and I doubt you wish to write a diff tool for each of them.

Castle Windsor has a fluent configuration mechanism and an XML format.  Annoyingly, neither of them supports environmental diffs.  I can't quite believe that this isn't baked in, but then the major competitors don't seem to address it either.  Furthermore, since there is no facility within Windsor for producing a container that specializes another container, you can't use specialization to produce environmental deltas.  Ideally, you'd have the principal configuration use the fluent interface, keeping with the ThoughtWorks dictum of not putting anything into a config that can be hardwired, and then specialize using the XML format; but you can't.  There is, however, one other alternative: Binsor.  Binsor is a true .NET language with a couple of specializations to support Windsor configuration (or a DSL, if you prefer).  It supports environmental deltas through the Extend macro.  The Extend macro isn't really documented, so here's a quick guide to how to use it:

MainConfiguration.boo:

def DoConfigure():
    Component "service", IService, ConcreteService:
        standardParameter = "Value the same across all deployments"

Environment1.boo:

import file from "MainConfiguration.boo"

DoConfigure()

Extend "service":
    environmentalParameter = "Parameter value is different in Environment2.boo"


This is incredibly powerful.  In particular, it lets you change your mind about which parts of the config are environmentally driven and which aren't.  Let's go through it (with some stuff I learnt the hard way):

  • Binsor allows you to include one file in another.  You need to wrap the code in the main configuration in a function that you call from the environmental diff.  This might seem backwards, but it’s pretty much the only way it can work.
  • The import file syntax supports relative paths, but make sure that you pass BooReader an absolute path to the environmental diff, or it may not work the way you're hoping.  (There's a loading sketch after this list.)
  • Extend is keyed by the name of the component that is declared in the MainConfiguration.boo file.  Thus, in the example, the ConcreteService class has string parameters for standardParameter and environmentalParameter.
  • There is, of course, the outstanding question of how you identify which environmental delta to use.  You’ve basically got two options here: get the install to rename the delta file to a standard name, or put the name into the appsettings.  (I’m finding it remarkably hard to kill off appsettings completely, much as I try…)
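For the second point, here's a minimal loading sketch.  (Hedged: this is the Rhino Commons BooReader API as I remember it, and the helper class is mine; the absolute-path handling is the bit that matters.)

using System;
using System.IO;
using Castle.Windsor;
using Rhino.Commons.Binsor;

public static class ContainerBootstrap
{
    public static IWindsorContainer CreateContainer(string environmentFile)
    {
        // Resolve to an absolute path before handing the file to BooReader;
        // relative paths can resolve against the wrong base directory.
        string fullPath = Path.GetFullPath(
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, environmentFile));

        var container = new WindsorContainer();
        BooReader.Read(container, fullPath);
        return container;
    }
}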

I like Binsor in that it's a complete solution to the problem.  It is, however, quite a heavy-weight solution, and you can't mix and match it with the fluent configuration, since it's doing all the hard work itself.  Because Castle Windsor's ComponentModel class is immutable and doesn't support specialization, Binsor has to build its own component model in order to support this feature.  That shouldn't bother you until you try stepping through the code and discover that there are several thousand more lines supporting this syntax than you were expecting.  A more general difficulty is that it uses Boo's most powerful and most dangerous feature: the ability to change the way the syntax tree is evaluated.  This produces a relatively elegant syntax, but it's not documented and there's no editor support for it that I know of.

Cost-Benefit Calculations when Developing for Browsers

John Resig has just put out a nice post about browser share.  It's well worth reading and I won't repeat what he says there.  I do, however, think the cost-benefit balancing graph is worth examining further.  If you take a look, it seems to say that IE7 is a lot of work but worth targeting, whilst IE6 is not worth the effort.  However, this is very dependent upon the browser share and the traffic to your site.  The yellow bars, the cost, are pretty much constant across browsers.  The blue bars, the benefit, can be expressed as follows:

Number of users of this browser × Revenue per user for this browser

Now, for a seriously low-traffic site, that might suggest that you shouldn't even bother supporting IE7.  In practice, you will, because you won't feel like you can take yourself seriously if you don't.  IE6, though, becomes a complete waste of time.  If you're Google, on the other hand, all of the blue bars are in the stratosphere, and you should support browsers that people haven't even thought of yet.

In practice, the cost-benefit nearly always favours supporting IE6.  Let's assume it took three months to support IE6 to the extent that someone could use the site, that the conversion rate for IE6 users is 7% and that you get £60 per action.  (These are reasonable numbers for some dotcoms.)  Finally, assume the contractor that does the work costs £300 a day and demands payment up front.  Then, if you had 137 visitors a month after that point, you'd get a 5% annual return on your investment.  That's assuming a three-year planning horizon.  Obviously, the numbers go down if you have a shorter horizon and up if you have a longer one.
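To sketch the arithmetic (assuming roughly 60 working days in three months): the cost is 60 × £300 = £18,000 up front.  The benefit is 137 visitors × 7% × £60 ≈ £575 a month, or about £20,700 over 36 months, which works out at roughly a 5% compound annual return on the £18,000.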

137 visitors a month is nothing.  It’s more than this blog manages, but a half-decent commercial business with a bit of TV advertising would have thousands of visitors.  You only need 4500 visitors to make 5% back the month you go live.  Obviously, you need to run your own numbers: some people will see the quick numbers I’ve made up and not even vaguely recognize their own business.  But I can reasonably guarantee you, if you’ve got a public website, you’re not going to be able to convincingly argue for the demise of IE6 any time soon.


Loving Krzysztof’s blog

This is a straight plug, but I've got to say, I'm really enjoying reading Krzysztof Koźmic's blog at the moment.  In particular, his guide to Dynamic Proxy is excellent.  I've certainly learned a lot by reading it.  I've said before that dynamic proxy is extremely powerful and useful.  It's good to see someone who properly understands it giving such a lucid step-by-step guide to how to use it.  Me, whether or not I keep working on AutoGen, I'll definitely keep working with DP.  Sadly, it's still quite hard to find on Google.  Let's hope this post improves that score.  🙂


AutoGen and the Common Service Locator

One of the nice things about finally finishing a major project is that you get to do some work on side projects you find entertaining.  So, today, I finally finished a rather ugly merge and got a new version of AutoGen for Castle out.  Obviously, it isn't hard to implement the Common Service Locator in Windsor.  However, the idea of AutoGen was to allow you to use your own abstraction for DI, so implementing Microsoft's seemed like a good stress test.  I'll take a crack at NServiceBus next… 🙂

I shamelessly ripped off Ayende’s tests for the CSL and ran them against an AutoGen version of the implementation.  I identified a number of issues:

  • You couldn’t specify a service type.  This isn’t important if you’re building an application, because you’d just specify a return type and AutoGen would handle it.  However, a framework doesn’t necessarily know the service type in advance, so framework interfaces need that feature.
  • You couldn’t perform a ResolveAll.  Now, if you return IEnumerable<T>, it automatically performs a resolve all.  This would interfere with anyone who registered an IEnumerable<T> service type, but frankly, Castle’s dependency injection pretty much fails when you do that anyway.
  • The Common Service Locator standard specifies the exact exception that needs to be thrown, which AutoGen didn't previously wrap.

Anyway, the good news is that this now works:

container.Register(
    Component
        .For<IServiceLocator>()
        .AutoGen()
        .WithWrappedException<ActivationException>()
        );


I also took the opportunity to allow you to override the names of the key and serviceType parameters, both in configuration and using the fluent API, since it seemed likely this would come up when implementing third-party interfaces.  I'm not massively happy with the wrapped-exception qualifier; it feels a bit too special-case (time will tell, it may prove useful in a number of circumstances).  It's implemented as a separate interceptor.  I'd have liked to have properly separated the concerns of resolution and disposal proxying, but it turns out that this is nigh-on impossible.  Basically, since proxying of concrete classes is fraught with difficulty, to say the least, you need to know what the original service type was in order to create a valid proxy.  AutoGen automatically adds in proxy interfaces for anything else the implementation exposes, which means that forwarding now works transparently.
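To make the "your own abstraction" point concrete, here's the shape of interface AutoGen is meant to implement for you.  This is a sketch with hypothetical names; only the IEnumerable<T> and key behaviours are the ones described above.

using System.Collections.Generic;

public interface IService { }

// A hypothetical consumer-owned factory; AutoGen generates the implementation.
public interface IServiceFactory
{
    // Resolves the default IService component.
    IService GetService();

    // Resolves by component key; the "key" parameter name is now overridable.
    IService GetService(string key);

    // Returning IEnumerable<T> triggers a ResolveAll.
    IEnumerable<IService> GetAllServices();
}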

Alt.Net and Adoption

Just read this.  Couldn't agree more.  Quite a few of the Alt.NET groupings have an explicit bias in favour of people who submit patches.  Now these people are, for the most part, giving away radically great stuff for free.  Expecting something in return isn't unreasonable.  However, without focus on people starting on the curve, or even halfway through, the ideas are never going to reach a wider audience.

My own curve is far from "at Z".  I "get" why persistence ignorance is important, but work on a lot of old sproc code.  I use dependency injection, but haven't mastered it.  I implement separation of concerns, but I wouldn't claim perfection there either.  I've got a lot of automated tests, but I still favour state-driven tests rather than behaviour-driven tests.  I'm also aware that many of the bloggers I admire (Jeremy Miller, for one) are significantly further along this process.  I do, however, find it slightly depressing that he believes it's time to give up on evangelism.  This from the guy who wrote one of the single best introductions to test-driven development and MVC design.

Now, I know that some people will have to be dragged kicking and screaming into adopting new techniques.  I think this story is occurring in microcosm in most firms up and down the land.  Me, I'm going to continue on my own journey towards Z, or whatever letter the "coalition of the willing" has reached now, and continue to document what I learn.  Why?  Because, actually, these techniques do make you massively more productive.  Inversion of Control solves a lot of configuration problems and drastically reduces the amount of boilerplate code an SoC design would otherwise entail.  Separation of Concerns makes my code more pluggable, more agile (small a) and just plain easier to understand, even for me.  Test-driven development has given me the ability to refactor code that in the old days I wouldn't have dared touch for fear of breaking something.  In short, it actually works, unlike many of the programming fads that have hit over the years.


How to click on whitespace in HTML

Reading Jeff Atwood’s latest post reminded me of this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<body>
  <a href='http://www.colourcoding.net/'><h1>
    Can click on whitespace
  </h1></a>
  
  <h1><a href='http://www.colourcoding.net/'>
  Can't click on whitespace
  </a></h1>
</body>
</html>

This is actually a really useful technique, especially if you’re using a background image behind your title.  You can click anywhere on the first line and it will follow the link.  Is it portable?  To the best of my knowledge every browser interprets this the same way.  Is it standard?

Err… no.  Actually, the standard doesn't even allow you to put an h1 tag within an a tag.  And there's no other way of doing this in the markup itself (we'll ignore the possibility of using JavaScript to cloak it).  I think I lost faith with validation at that point.
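For what it's worth, CSS gets you close without invalid markup: make the anchor inside the heading a block element, so it fills the heading's box.  A sketch (the inline styling is illustrative):

<h1>
  <a href='http://www.colourcoding.net/' style='display: block;'>
    Whitespace is clickable here, and the markup validates
  </a>
</h1>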


Concurrency begins at home

I've been watching some very excited tweets go past about concurrency improvements in .NET 4.0.  I've got to say, I'm quite looking forward to hearing about this once the NDAs drop away (I'm not an MVP, nor am I likely to be).  Retlang is pretty much the only game in town for concurrency right now, and it's quite hard explaining to people the problem it's trying to solve.  The standard question is "What's wrong with using threads?"  The work the Maestro team is doing looks exciting as well, although I'm a bit dubious about the benefits of a language rather than a C# DSL.  (Isn't it about time they introduced more DSL-friendly syntax, anyway?  I guess that wouldn't give Don Box anything to do.)
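For anyone asking that question, here's a minimal sketch of the Retlang model: messages published to channels and processed sequentially on fibers, rather than shared state and locks.  (The API is as I remember it; check the current release.)

using System;
using Retlang.Channels;
using Retlang.Fibers;

public class Program
{
    public static void Main()
    {
        // A fiber is an execution context: every message it receives is
        // processed sequentially, so the handler needs no locks.
        using (var fiber = new PoolFiber())
        {
            fiber.Start();

            // A channel decouples publishers from subscribers.
            var channel = new Channel<string>();
            channel.Subscribe(fiber, msg => Console.WriteLine("Got: " + msg));

            // The publisher never touches the subscriber's state directly.
            channel.Publish("hello");
            Console.ReadLine();
        }
    }
}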

However, what would really make my day is if they fixed the debugger.  I’m running a piece of code with 30 threads here.  It’s rock solid in production.  Pity it crashes after about 15 minutes in the debugger.  I can work around it but I really shouldn’t have to.

Metrics and Money

I've been trying to stick together a few notes about what I've learnt about process design and implementation over the years.  The "short notes" just keep getting longer and longer.  However, this story is worth repeating, and it illustrates one rule that everyone should know by now: never link a metric to cash.  It seems like the most sensible thing in the world to give out bonuses on the basis of deliverables.  It's not; it will damage your department and your firm.  If you're lucky, you'll know how.  If you're unlucky, you'll never find out.

Let me give an example.  A friend of mine worked at a firm; we'll call him Joseph Developer.  Management, having finally ditched performance-based metrics a year previously (for reasons that would make a good post of their own), had decided that it was time for a re-introduction.  With a new and popular CTO keen to make his mark, they sat down and came up with a new plan.

OK, I can hardly tell this story without laughing.  I'm actually laughing right now typing this.  I can assure you that it wasn't as funny for anyone directly affected by it.  Let me just remind you that this was the best plan the CTO, the COO, the CFO and the CEO locked up in a room could come up with.  This was what they pitched to the developers.

  • They were concerned about the quality of their systems.
  • So they thought about linking bonuses to the number of bugs assigned to someone.
  • But they knew that didn’t work.
  • So, rather than that, they decided to target the number of re-opened bugs.

I can pretty much pinpoint that as the point where some people's nagging suspicion that the CTO wasn't as good as they'd hoped transformed into an unshakeable certainty that he didn't have a clue.  Here's what they'd done: they'd taken a system they knew didn't work, taken the first derivative and decided to implement that.

Labouring the point

Clearly, this announcement would never have occurred if the managers had actually understood the problem.  So let’s talk for a bit about why you shouldn’t incentivize people on the basis of bug count.

  • What is a bug and what isn't is ill-defined.  That isn't a problem until you start making dollar amounts (which are about as concrete as you can get) dependent on the distinction.
  • You’ve just created an adversarial relationship between the bug reporter and the developer.  These can develop anyway, but they’re never productive.  By putting money on the line, you’ve guaranteed it.
  • You’ve re-incentivized people to work in a negative, rather than a positive way.
  • Your developers will figure out a way to game the system.

The first two points look like they’re unique to bug tracking, but they’re not.  All of the metrics that you’ve got are indicators of what’s going on, not the unvarnished truth.  All that the bug metric tells you is that a certain number of bugs have been entered into your system.  It’s not a measure of quality and it sure as hell isn’t a measure of productivity.  That’s not just true of bugs, it’s true of the actual P&L of your company.  Don’t believe me?  Ask an accountant.  Or a market analyst who has to decode earnings announcements.  Not even the bottom line is the bottom line.

People get misled by examples from manufacturing and construction, in which well-defined metrics produce well-defined outcomes.  Go and read that Joel article again.  I’d go a bit further and say that such incentives do work in very limited circumstances: where you want the guy to do that and only that.  My old project manager once killed a round of testing simply because it meant he would hit his bonus targets.  Fixing bugs would not.  Sales commissions are brilliant at getting salesmen to sell.  You’d better keep a fairly tight eye on exactly what they’re selling, though.

The adversarial point is equally general.  You’ve replaced a metric which helps you tell what’s going on with a salary negotiation mechanic.  I hope you’re not planning on that metric being used for anything else.  Such as, for instance, bug tracking.  I think there have been enough options scandals by now to emphasize that this is equally true of earnings numbers.

The final point is the one that should make you pause, even if the others didn’t.  Let’s take a look at our four guys locked up in a room.  They were employing more than 50 developers.  Bright as they were, they weren’t brighter than 50 guys with degrees and training in a profession that emphasizes logical thinking.  Actually, it doesn’t even matter that they were bright, anyone could have come up with a way of circumventing it.  Since you’re paying them to do so, you can be reasonably guaranteed they will.

So how many of those objections didn't apply to the "penalize re-opened bugs" plan?  None of them.

  • What was an unfixed bug and what was a separate issue was ill-defined.
  • They made a fairly adversarial relationship between QA and development ten times worse. 
  • People regularly worked in a negative way.  Often, more time was spent arguing about the exact status of a bug than fixing it.

And as for gaming the system, well, that’s where the fun really began.

Gaming the system

When I was first told of this, it took me five seconds flat to figure out how to game it.  Just fix easy bugs.  If a label's wrong, fix it.  Avoid anything involving a nasty interaction or an ill-defined behaviour.  Never mind that those are where the value is.  In fact, the truly pernicious thing about this whole process was that it actively penalized the senior, responsible developers who took on hard problems.  Now, as I say, I was lucky; I didn't have to put up with such a stupid system.  My friend did.

So, I met up with Joseph a couple of months after this had been put in place and asked him how things were going at the firm.  He told me that the atmosphere was very negative (not solely caused by this decision) and that he'd already given up on getting a bonus that year.  He was two months into the bonus cycle.  This was an extremely talented and conscientious guy, regarded as a star at the firm.  And he'd given up on getting a bonus, simply because he was behaving like a responsible developer and not gaming the system.

Two months later, he left and joined a much better firm.  Yes, eventually he did figure out a way of circumventing the policy that he was happy with: he quit.  He wasn't the only one.  And, for the reasons I've already outlined, it was the best staff who jumped.

Now, management at this firm clearly forgot Evil Overlord Rule #12, but just because the example’s extreme it doesn’t mean that the point isn’t general.  Incentive structures distort behaviour and de-motivate staff.  Metrics-based incentive structures distort metrics as well.  At my current workplace, I receive a bonus based on how well the firm did and how well my managers think I did.  Yes, I still have targets, and I still try to hit them, but I don’t let my targets interfere with serving the business.

So how's Joe now?  He's still at his new firm and very happy.  I tried to tempt him with a job that would pay significantly better, but he's not interested.  Now, that's a firm that knows how to motivate its staff.


NHibernate Truncates Milliseconds when Using DateTime

It's even documented, right here.  It's a bit annoying, seeing as it violates the principle of least surprise, but it does make the DateTime type portable across database implementations.  However, if you want to use your database's native precision, just declare the property as having "Timestamp" type.  Problem solved.  No need to mess around with IUserType or worse.
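As a sketch (the property name here is mine, not from the docs), the mapping change is a single attribute:

<!-- Hypothetical hbm.xml property: type="Timestamp" keeps the native precision. -->
<property name="CreatedAt" type="Timestamp" />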


Automated Deployments #2: Configuration Management

It's amazing how much engineering time is spent arguing about the differences between abstraction strategies, followed by someone saying "just copy the files up, but make sure not to touch the config".  This is a recipe for disaster.  There are three common failure scenarios:

  • Someone takes a copy of the live system, runs some tests and accidentally enters the test data into the live system.  I once saw that happen with a stress test.  It wasn’t funny.  (In fairness, it’s pretty funny in retrospect.)
  • Someone uploads a debug environment, rendering the live system unstable.  (This is mostly a web-related scenario.)
  • A new version is correctly released, but it required a config change which never made it into the production config.

Now, most people run with this policy because “don’t touch the config” produces fewer failures than “touch the config”.  You could argue that most of these scenarios are associated with not carrying out the instructions to the letter.  However, this is to miss the point.

Successful processes minimize the chance of human error.

If someone forgot a step, and that guy is not a muppet, your process has too many steps.  Our release process has one.  Exactly one.  I loathe processes that seem to have as their principal benefit that you know who to blame when it goes wrong.  I would much rather things didn’t go wrong in the first place.  So, we’re looking for a process that guarantees that the environmental differences are respected, but that changes required by the code are propagated.

Types of Environmental Factor

Configuration management is a big and scary subject, and is the proper preserve of your IT department, not your developers.  However, if you concentrate on just the bits that matter to developers, it need not be that big an undertaking.  Let’s go back to basics.  In general terms, there are three common sorts of .NET application:

  • A standard Windows client application.  This includes console and GUI apps.
  • A Windows service.
  • A web site or web service.

For standard Windows applications, your environmental delta will usually be in the app.config file.  Unless you have multiple installs of the same service on a machine, it is unlikely you'll have any environmental differences in the Windows service setup itself.  Web sites themselves are typically identical across all deployment environments.  The fact remains that nearly all installs of corporate applications can be summarized as follows:

  • Copy some files.
  • Fix the config file.
  • Set up the Windows service.
  • Set up the service in IIS.

Now, to produce a perfect install, you end up messing around with InstallShield or WiX or some such tool.  However, to cover 95% or more of environmental issues, all you really need is a way of fixing the config file.  I'll remark at this point that, since you have control over the entire ecosystem, you can ensure that your system doesn't require weird settings you can't handle.  Equally, I'd go out of my way to eliminate stuff like custom config sections.  They're more trouble than they're worth.

In practice, in our environment, we have a phenomenal number of programs, and the only config entries we ever change upon deployment are:

  • AppSettings
  • ConnectionStrings (We encrypt the connection strings when we apply the delta)
  • Setting compilation debug equal to false.  (I can't stress enough how important this one is.)

I may one of these days publish the code we use for this (it's a PowerShell cmdlet), but the fact remains, it's easy enough to implement on your own; there's a sketch below.  Incidentally, I can highly recommend you don't use the Enterprise Library solution.  It's quite complex and has weird bugs (e.g. it won't work on a network file).
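To give a feel for how little code this needs, here's a minimal C# sketch.  (Our actual cmdlet isn't published; the file names and delta format here are illustrative.)  It overwrites matching appSettings entries from a delta file and forces compilation debug off:

using System;
using System.Xml;

public static class ConfigPoker
{
    public static void Main(string[] args)
    {
        var config = new XmlDocument();
        config.Load(args[0]); // e.g. Web.config
        var delta = new XmlDocument();
        delta.Load(args[1]);  // e.g. Production.delta.xml

        // Overwrite each appSettings entry that the delta redefines.
        foreach (XmlElement add in delta.SelectNodes("//appSettings/add"))
        {
            string key = add.GetAttribute("key");
            var target = (XmlElement)config.SelectSingleNode(
                "//appSettings/add[@key='" + key + "']");
            if (target != null)
                target.SetAttribute("value", add.GetAttribute("value"));
        }

        // The one I can't stress enough: turn off debug compilation.
        var compilation = (XmlElement)config.SelectSingleNode("//compilation");
        if (compilation != null)
            compilation.SetAttribute("debug", "false");

        config.Save(args[0]);
    }
}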

Storing Environmental Deltas

When we were designing this system, we considered the following models:

  • All deltas are stored in the same file as the master; the program then determines which environment it needs to use.
  • The development config is the master; deltas are separate files applied to it upon deployment.
  • Deltas are applied to a master.config to produce all configs, including the development config.
  • We use the user settings features built into AppSettings.
  • We just have different configs for each environment, and the deployment process copies up the right one.

There are die-hard fans of all approaches, but I’ll outline why I believe the second to be superior. 

The monolithic-file approach is attractive at first because everything's together, but it suffers from catastrophic unmanageability once you get a lot of settings (which is a problem you shouldn't have, but may have).  Furthermore, there is the inelegance of having to deploy information for one environment to another (unless you write a post-processor, in which case you might as well have opted for alternatives 2 or 3).  The self-discovery aspect is attractive, and the monolithic file is easy enough to put into source control.  Just putting all of the configs into source control has its attractions as well, but suffers from the fact that 90% of the XML will be the same in each file, making it hard to track down the differences.  I prefer a model with explicit deltas.

The built-in features sound attractive, because it feels like Microsoft has already done the heavy lifting for you.  However, you're pretty much guaranteed to be still modifying the web.config anyway, and you've split the config into multiple parts not only for management, but for the deployment environment too.  A lot of people practice this method by having the deltas present only on the target machines, at which point your environmental information isn't in source control at all.

Finally, we're left with the choice between having an abstract master file and having the master file be the local development config.  Here, I'd argue that the local development file will be edited by developers directly whether you like it or not.  Better to embrace that than to have it be a failure point each time it happens.
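For concreteness, a delta file under this model might look something like the following sketch (the keys and values are illustrative); it contains only the entries that differ from the development config:

<!-- Production.delta.xml: only what differs from the development config. -->
<configuration>
  <appSettings>
    <add key="ServiceUrl" value="http://live.example.com/MyService" />
  </appSettings>
  <connectionStrings>
    <!-- Encrypted by the tool when the delta is applied. -->
    <add name="Main" connectionString="Server=LIVEDB;Database=App;Integrated Security=SSPI" />
  </connectionStrings>
</configuration>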

Final Thoughts

The level of configuration management you need for .NET apps is pretty easy to implement, which makes it a pity that very few people bother.  All you really need is a couple of XML pokes and you're done.  One of the great benefits is that all of your environmental information is in source control (you can even make the program that applies the deltas encrypt the data if you regard that as desirable), which makes it much easier to check things in a large heterogeneous environment.  (Again, not a problem you should have, but a problem you may have.)

And yes, the first failure scenario mentioned at the top is also the reason you should have a firewall between development and production.  Next time I’ll talk about configuration that doesn’t appear in the .NET config file.