Automated deployments #1: What’s on your server?

No really, what’s actually on your server?  If your first answer isn’t “erm”, you’re either very good or you don’t understand the question.  Servers are huge; they can store War and Peace on their hard drives without you even noticing.  For that matter, they can store any number of PDFs of books on patterns and practices without any appreciable benefit to civilization, but I don’t think that’s really the fault of the servers.  It’s practically impossible to really know what’s on there.  What’s worse, the way most people do development, they make the job harder for themselves.

I had a meeting with our auditor today and thanked my lucky stars that we had automated deployments.  Automated deployments save an awful lot of effort if they’re done right, and they really save your hide when people start poking around your process.  Let’s talk about a really simple question: what’s on your server?

If you tell me it’s version 1.2.6, I’m going to have a few questions.

  • What was in version 1.2.6?  Is there a label in source control?
  • Was every file checked in?
  • What build does that correspond to?
  • How can you check that the build is what got deployed?
  • How about the config, is that in source control?  The actual config that’s on the server right now.
  • How do you know nothing’s changed on the server since then?

Look at Microsoft, or any large company, and they’ve got this sorted out.  It’s internal development teams that tend to have a problem.  When people ask these questions:

  • What’s changed since we last deployed?
  • What could this affect?
  • Can we roll back?

You want to have good answers.  And absolutely fundamental to this is: know what’s on your server.  Exactly.

First, you need to have a build server.  Download and love CruiseControl.NET.  Builds on local machines always turn out to have the wrong version, a reference to something that isn’t in source control, or a dependency that isn’t properly worked out.  A real pain for anyone starting with this is that it turns out your solution files aren’t really as flexible as you’d like.  You can get going with MSBuild, but there’s a reason every open source project uses NAnt.  (NAnt is far from perfect, but it’s a heck of a lot easier than MSBuild for anything slightly complex.)

Anyway, here are my answers:

  • Version numbers are build numbers.  “1.2” is just for PR, it’s got nothing to do with versioning.  Call it what you like (you can call it Vista if you must) but the real version number is the build number from the build server.
  • Build Servers will only build files that are checked in.
  • I said that version numbers are build numbers, right?
  • We label every assembly that gets built with the build number (I stick it in the summary in the DLL).  This makes it really easy to just check the version on the server (there’s a rough sketch of that after this list).  Also, we stick the information in an About box or a web service call.
  • The actual config on the server isn’t in source control, but the environmental delta is.  The deployment process stamps the config with the information as well.
  • Making sure that nothing’s been changed is harder, because no-one’s written a general tool for doing so, but taking a hash of the directory straight after deployment and checking it each day will catch most of that.  (You can hash files individually for better diagnostics; there’s a sketch of this after the list too.)  Tracking every setting on the server is probably more trouble than it’s worth, but I do have a tool for downloading IIS Virtual Directory settings to an XML file, because that turned out to be on the right side of the cost/benefit calculation.
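To make the version-checking bit concrete, here’s a rough sketch rather than our actual tooling: the class name and the directory path are made up, and exactly which attribute your build stamps (AssemblyVersion, AssemblyFileVersion, whatever ends up in AssemblyInfo.cs) depends on your build script.  The point is just that the build number ends up somewhere FileVersionInfo can read it back off the server.

    using System;
    using System.Diagnostics;
    using System.IO;

    // Rough sketch: dump the version stamped into every assembly in a deployment
    // directory, so "what's on your server?" becomes a one-liner to answer.
    // The default path is illustrative; point it at the real bin directory.
    class WhatIsOnTheServer
    {
        static void Main(string[] args)
        {
            string binDir = args.Length > 0 ? args[0] : @"D:\Websites\MyApp\bin";
            foreach (string dll in Directory.GetFiles(binDir, "*.dll"))
            {
                // FileVersionInfo picks up whatever version the build stamped
                // into the DLL's version resource.
                FileVersionInfo info = FileVersionInfo.GetVersionInfo(dll);
                Console.WriteLine("{0,-40} {1}", Path.GetFileName(dll), info.FileVersion);
            }
        }
    }

Run it against the bin directory on the server and compare the output with what the build server says it produced.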
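And here’s a minimal sketch of the directory-hash check, again just an illustration: SHA-1 and the sort order are arbitrary choices, and in practice you’d keep the per-file hashes somewhere rather than throw them away.  Take the output straight after deployment, re-run it each day, and diff.

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    // Minimal sketch: hash every file under a deployment directory in a stable
    // order and combine the results.  If today's output differs from the one
    // taken straight after deployment, something on the server has changed.
    class DirectoryHash
    {
        static string HashDirectory(string root)
        {
            string[] files = Directory.GetFiles(root, "*", SearchOption.AllDirectories);
            Array.Sort(files, StringComparer.OrdinalIgnoreCase);

            StringBuilder perFile = new StringBuilder();
            using (SHA1 sha = SHA1.Create())
            {
                foreach (string file in files)
                {
                    // Hashing files individually (and keeping these lines around)
                    // gives better diagnostics when the overall hash changes.
                    using (FileStream stream = File.OpenRead(file))
                    {
                        byte[] hash = sha.ComputeHash(stream);
                        perFile.Append(file.Substring(root.Length));
                        perFile.Append(": ");
                        perFile.AppendLine(BitConverter.ToString(hash));
                    }
                }
                byte[] overall = sha.ComputeHash(Encoding.UTF8.GetBytes(perFile.ToString()));
                return BitConverter.ToString(overall);
            }
        }

        static void Main(string[] args)
        {
            Console.WriteLine(HashDirectory(args[0]));
        }
    }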

Your answers don’t need to be the same, but I guarantee your life will be easier when you have answers to these questions.  Importantly, the work scales: the more people join your team, the more important this stuff becomes.  Incidentally, you can do all of this in TFS.  I know, I’ve done it.  And I’ve regretted not using CruiseControl.NET, NUnit, SVN and NAnt every time.  Open source, depressingly, appears to be better documented than the stuff I paid for.

Mono’s still got a long way to go

Reading this didn’t impress me massively, and not only because it’s a reheated blog post.  Don’t get me wrong, the Mono team has done some superb work, but it’s really not ready for primetime.  Miguel does a phenomenal job of cheerleading, but let’s take a look at this particular example.  That’s not an industry standard benchmark they’re running there, it’s some code on some guy’s blog.  It’s quite a nice and interesting blog, but it’s nowhere close to the mainstream.

Sadly, where Mono still falls down is meat-and-potatoes issues.  Look at the number of patches the Ubuntu team need to make to each release before they’re happy packaging it.  Look at the weird behaviour problems that the Castle team discuss on their mailing lists (e.g. why on earth does Mono 1.9 believe it can convert from a string to an interface?  Don’t they have a regression test for that?).  Worst of the lot, however, has to be the garbage collector.

Getting the garbage collector wrong is second only to getting the compiler wrong.  People won’t understand what the problem is, but they’ll suffer when it doesn’t work right.  Mono currently uses the Boehm garbage collector, which is a conservative, non-compacting collector originally built for C and C++.  If you use vanilla .NET, you don’t need to know about the garbage collector for the most part (unless you’re doing stupid things with finalizers); however, if you’re running on Mono, the same program that runs fine on .NET can give you nonsense like this:  http://www.mail-archive.com/mono-list@lists.ximian.com/msg22436.html.  (Incidentally, the suggested remedy is a sticking-plaster over a sword wound.)

At the moment, the only real solution to this problem is to use allocation patterns that the Boehm GC likes, which seems ridiculous to anyone who stopped worrying about memory fragmentation five years ago.  In fairness, the Mono Project is planning to address this at some point towards the end of the year.  Then all I’ll be worried about is their quality problems.
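For the curious, this is roughly the kind of allocation pattern that hurts: repeatedly allocating sizeable, variably-sized buffers and letting most of them die.  The sizes and counts below are invented purely for illustration (and kept under the threshold at which .NET would shunt them onto its large object heap); a compacting collector tidies this up as it goes, while a conservative, non-compacting one can be left with a heap full of holes that a new allocation no longer fits into, even though plenty of memory is notionally free.

    using System;

    class FragmentationSketch
    {
        static void Main()
        {
            Random random = new Random();
            byte[] survivor = null;

            // Allocate buffers of varying sizes and let most of them die,
            // keeping the odd one alive.  A non-compacting collector can't move
            // the survivors, so the freed space between them stays fragmented.
            for (int i = 0; i < 10000; i++)
            {
                byte[] buffer = new byte[16384 + random.Next(65536)];
                if (i % 100 == 0)
                {
                    survivor = buffer;   // keep the occasional buffer alive
                }
            }

            GC.KeepAlive(survivor);
            Console.WriteLine("Done");
        }
    }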

Unforeseen Consequences: My lavatory made a funny noise

Now, everyone in my office has already heard this story, but it deserves a wider audience.

Six months ago, I bought a house.  One of the lavatories made a funny noise when it flushed.  Actually, not so much funny as extremely loud.  This noise would go on for about a minute.  It actually sounded like the house was shaking apart.  I ignored this for months and worked around the problem.  Being a geek, I figured out that running the bath at the same time stopped the noise, so I knew I was dealing with a resonance problem.  I’m not, however, a plumber, and had no idea what was starting the whole thing off.

There are a couple of weird things about the house.  One is that the lower bathroom had a shower head, but no shower rail.  Not particularly wanting to soak my new house every time I used the shower, I used the shower upstairs.  Finally, I (or should I say, my significantly more organised wife) got the plumber ’round.  He installed a shower rail, took one look at the loo and determined it had the wrong washer on it.  Replacing the washer for a fiver, he managed to fix a problem that I’d assumed was going to cost me thousands.

I then went away for a couple of days, and came back to a leak in my kitchen.  Water was seeping through from the ceiling.  I went nuts, thinking the house was about to fall down.  I phoned up the plumber and he agreed to come back on the Sunday morning.  (Our plumber, you will appreciate, is an absolute brick.  Couldn’t praise him more highly.)  In the morning, we started discussing the problem.  Maurice (really) first wanted to check that he hadn’t drilled through a pipe.  He was quite happy to admit that he had done so before, but he doubted this was the problem, since we’d have a lot more water leaking.  We then had a relatively serious discussion about whether it was better to rip up the floorboards or break through the plaster.  Another difficulty was working out from where exactly it was leaking.  Finally, I asked him if it was possible that fixing the washer had affected something else.  Maurice said “No, that can’t happen.  Let me explain why.”  He lifted off the cover of the lavatory tank, stared at it and said “There’s no overflow”.

For those of you who don’t know, the overflow is a pipe out of the back of your loo that goes outside.  In the event of a minor problem, you end up with water being dumped outside your property.  Since the property can handle rain, it’s not an urgent problem and it’s easily fixed.  What my loo did was drop the water from the overflow onto the floor, and eventually through the kitchen ceiling.  Basically, the guy who’d installed it in the first place had done a dreadful, incompetent job.  So now I have a bucket where the overflow should be and another date with Maurice, where he’s going to install some overflows.

The reason I mention this is, the experience was nigh on identical to conversations I have with my manager about some legacy systems I deal with:

  • Problems sometimes aren’t as serious as their symptoms suggest.
  • Fixing one thing may highlight a problem somewhere else.
  • Always explain to someone else why something can’t possibly happen.
  • An audit won’t find every problem.
  • You’re always going to get these problems when you’re taking over a badly done job.

And sadly, sometimes you won’t get lucky and will have to rip up the floorboards to figure out what’s going wrong.

How to support Default.aspx in ASP.NET MVC

If you’re trying to use MVC incrementally on an existing project, this can be a bit of a pain.  There must be some way of getting the routing logic to redirect itself, but in practice the following does the trick (assuming you’re using the standard routing).

    using System.Web.Mvc;

    // Catches the default route and bounces it to the existing WebForms page,
    // so Default.aspx keeps working while the rest of the site moves to MVC.
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return new RedirectResult("~/Default.aspx");
        }
    }
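With the standard route from the project template (which, by default, maps the root of the site to Home/Index), a request for the site root hits this controller and gets bounced straight back to the existing Default.aspx.  That’s an assumption about your route table, of course; if you’ve customised the defaults, you’ll need a controller and action that match them.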