Metrics: Signal to Noise

Okay, I promised I’d stop talking about metrics, but it occurred to me that there’s a serious point I hadn’t covered: sometimes your measures are broken, and you need to fix the measure before it’s going to tell you anything useful.  The classic example of this is error reporting: if you’re reporting too many errors, you won’t notice when something serious is going on.  I once saw a system that generated 2400 errors a week, and that was under unexceptional circumstances.  At 30 seconds per error, working through them all would have taken 20 hours: half your working week.

It doesn’t take a genius to figure out that no-one was looking at those errors.  So, if there were a couple of really serious problems occurring, no-one would ever know.  Well, not until they started to get the phone calls.

Thing is, the code entanglement statistics from NDepend look like that: I can pretty much guarantee that 95% of them aren’t a problem.  But you’re never going to know which ones are unless you go through them all.  Oren’s random sampling doesn’t really prove anything other than that there’s a lot of noise.  It doesn’t mean there isn’t some signal there.

Personally, I find the idea of code metrics fascinating, but I can’t help feeling that we’re still a long way from having usable ones.  An NDepend-driven refactoring might yield genuine results, or it might be a complete waste of time.  This doesn’t mean that Patrick shouldn’t be trying to improve the state of the art.


Maintainability: I think there’s something in the water

Did someone declare it “talented developers talk rubbish” week when I wasn’t paying attention?  Maybe I wasn’t on the mailing list… *sigh*

First we had the extremely smart Frans Bouma talking about the importance of proving your algorithms as a development methodology, which at least had the virtue of being funny.  We then had one of the single most productive developers in Alt.Net talking absolute garbage about maintainability.  At least Patrick Smacchia is still talking sense.  Let’s go back to what Ayende’s saying. 

Maintainable is a value that can only be applied by someone who is familiar with the codebase.

This is the exact defence I have seen of most of the worst systems I have ever encountered, from dodgy spreadsheets to distributed Access macros to over-engineered C++.  I have personally, to my shame, delivered systems that had exactly these problems: they were well architected (by my definition of a good architecture), they were easy to modify (by me), and they followed consistent conventions (known to me).  That’s not a good system.  Even if you replace “me” with “us” you haven’t got a good system.  This is what I would term the “I’m incredibly smart” anti-pattern.

The problem with the “I’m incredibly smart” anti-pattern is that the projects using it often have incredibly high velocities.  Until someone leaves.  Companies often don’t have too much of a problem with this, especially during a recession.  Also, they know that the next bunch of developers who work on the system will probably want to either a) rewrite everything or b) run the project in maintenance mode until it’s as bad as any legacy system.  The good news is that the latter scenario definitely isn’t happening to NHibernate.

Part of the problem is that maintainability should be a concept that is separate from the IQ of the developers: NHibernate is developed by extremely smart people, but that shouldn’t have any impact on its maintainability as a code base.  However, this doesn’t mean that Joe Graduate on his first coding job should be able to pick up and modify the code.  Maintainability isn’t a case of the lowest common denominator.  If it were, we’d have to ensure we never used the following:

  • yield return
  • lambda expressions
  • LINQ
  • and frankly, NHibernate

I’ll talk a bit more about the middle ground in a later post.


Don’t register controllers as singletons in your container

This is just me documenting a particularly weird behaviour of ASP.NET MVC.  Usually it doesn’t matter, but if you use your own controller factory, as MvcContrib does, it’s a pain to diagnose.  Basically, if you re-use a controller instance, the ModelBindingContext’s values are cached.  This is, to say the least, freaky, especially since you can see that the URL has changed.  So, you fire up one page, move to another id and get back the first page.  In short, register controllers as transient explicitly.  If you’re wondering how I found this out, consider that AllTypes in Castle Windsor registers services as singletons by default.
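For reference, the explicit registration looks something like this.  This is only a sketch against the Windsor fluent registration API of the era; the assembly scan and controller base type are whatever your project actually uses:

```csharp
using System.Web.Mvc;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Bulk-register every controller in this assembly, explicitly transient.
// Leave out the Configure call and AllTypes gives you singletons, which is
// exactly where the cached ModelBindingContext values come from.
var container = new WindsorContainer();
container.Register(
    AllTypes.FromThisAssembly()
            .BasedOn<IController>()
            .Configure(c => c.LifeStyle.Transient));
```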

NDepend Maintainability Metrics

Now, I have some contempt for code metrics, ever since I discovered that TFS thought my worst code was a constructor that checked its parameters for null.  Metrics are useful indicators.  Profit is a metric used to measure the health of a company.  It’s not always useful, but that’s the nature of metrics: they’re indicators, not hard and fast rules.  I thought it might be interesting to examine Patrick’s original post about NHibernate 2.1, which started the whole recent argument about maintainability.

First, he took a look at the number of changes to the code base.  There’s been a phenomenal amount of work done here, and I don’t regard that as a problem.  It still passes pretty much the same unit tests.  Replacing a dreadful piece of code with a good piece of code that satisfies the same conditions doesn’t make it less maintainable, it just means Steve Strong is smoking.  However, metrics are pointers, and the sheer number of places changed by the HQL changes suggests the possibility that the old HQL code was a mess.  This isn’t necessarily the case, and I’m not qualified to judge, but that is what the metrics are suggesting.

Next, he took a look at the assembly dependencies.  Here the story is that the code base is improving: the dependencies on Castle have been removed, part of the project to allow pluggable interceptors, and a dependency on ANTLR3 has been taken.  This isn’t a problem: it doesn’t introduce an externality and it doesn’t affect user preferences: no-one’s about to demand a different HQL lexer, but plenty of people wanted to plug in their own proxy framework.

Entangled Code

Okay, now we get to something interesting: Entangled Code.  Patrick’s been pushing the idea that you shouldn’t have namespace circularities for quite some time.  It’s not standard practice (Udi Dahan does it in nServiceBus, albeit in a bizarre and inconvenient way), but actually it’s probably a good idea.  I think it’s an idea that would gain more traction if an open source tool integrated the concept directly into Visual Studio.

But entanglement at the class level?  As Ayende points out, it’s often the only way you can get things done.  No-one worries about circular dependencies with inner classes, and they shouldn’t.  Since the difference between an inner class and an external class is often file size, I think a two element cycle is completely ignorable.  On the other hand, some of the dependencies are solvable:  the dependency of SessionImpl on SessionFactoryImpl, for instance, is only really required by serialization.  This is solvable.  Whether you regard it as a problem is quite another matter.  With my purist hat on, I’d certainly prefer it was separated out, but I’m sure many would argue it doesn’t much matter.

The truth is: no-one’s going to address this in a hurry.  Whatever the costs of having your code all over your namespaces, it’s probably smaller than the cost of diverging too far from the Hibernate source base.

Breaking Changes

Well, first let me say a breaking change is a breaking change.  On the other hand, I’m not sure I’d be too bothered about

  • Removed public types that had no public constructors and were never returned with their concrete type
  • Changing interfaces: interfaces are brittle, abstract classes aren’t.  We know that.  It’d be nice if there were a public document that explained NHibernate’s upgrade policy, but I’m not going to lose sleep over this.

There’s a question of ALT.NET philosophy here.  If you really wanted to avoid breaking changes, you’d use abstract classes for everything just like Microsoft.  The cost of the occasional breaking change isn’t really that high.

Truth is, if you drill in, Oren’s right not to worry.  Most people are going to drop in the new assembly and not notice the difference.  The one thing they’re really going to notice, the requirement to specify a ByteCodeProvider, is completely intentional and isn’t picked up by any of these metrics.  Most of the other “breaking changes” aren’t going to break real code.  NHibernate actually has a very strong story here: the release notes highlight all of the likely breaking changes.

Code Metrics

If you actually take a look at Patrick’s report on “methods that could do with refactoring”, you’ll see that a particularly large offender is one of SessionFactoryImpl’s constructors.  Personally, I think that many code metrics don’t take constructor injection into account:  there’s certain patterns of behaviour you expect to see in a constructor that send cyclomatic complexity through the roof.  It doesn’t do the code any harm, though.

Measures such as cyclomatic complexity sometimes punish desirable behaviours.  It’s time we got better metrics.  For example, I don’t believe checking a parameter for null and then throwing an ArgumentNullException should make a code quality metric look worse.
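To make that concrete, here’s the kind of constructor I mean (a made-up example; the dependency types are placeholders).  Three guard clauses means three extra branches, so the complexity count triples before the constructor has done anything remotely risky:

```csharp
using System;

public class OrderProcessor
{
    private readonly object _repository;
    private readonly object _logger;
    private readonly object _clock;

    // Each null check adds a branch, and every branch bumps the cyclomatic
    // complexity score, so the metric flags the safest code in the class.
    public OrderProcessor(object repository, object logger, object clock)
    {
        if (repository == null) throw new ArgumentNullException("repository");
        if (logger == null) throw new ArgumentNullException("logger");
        if (clock == null) throw new ArgumentNullException("clock");
        _repository = repository;
        _logger = logger;
        _clock = clock;
    }
}
```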

A Bad Example

NHibernate was picked by Patrick for this analysis because it’s a large, well-known open source code base.  However, unfortunately it’s a singularly bad code base to pick for this kind of analysis.  Like it or not, NHibernate’s maintainability is directly affected by its similarity to the Hibernate code base.  A metrics-driven refactor would actually decrease project velocity, which is ultimately all we care about.

 

A New Subtext Skin

If you’re actually looking at this page on the web, rather than viewing it in Google Reader as my stats say is more likely, you’ll have had a bit of a surprise yesterday.  This is why I’ve been so quiet recently: I estimate that I’ve spent something like four man days on this skin and I’ve got to admit I’m quite exhausted.  Although Origami is a good professional skin, half of the .NET internet is using it and my customizations probably weren’t noticeable to most of the users of the site.

Let me set out what I wanted to achieve with the redesign:

  • I wanted to make the text easier to read.
  • I wanted to demonstrate the use of CSS to produce a “designery” look, without using Photoshop.
  • I wanted it to look good in all major browsers, but I wasn’t too fussed about pixel accuracy.
  • I wanted to try to emphasize engagement.  So, comment and feed subscriptions are now more prominent.

How it was done

This was developed in the classic fashion of ripping off the Naked skin.  Now, you’ll know from my previous posts that I’m not a designer.  However, this time I had help.  A lot of help, actually:

  • Colour Scheme Designer is a god send.  Any problems you have with pink are with me, not the tool.
  • Equally, Pattern Cooler is a phenomenal site which does something too few “helper” sites do: work with you to produce the effect you’re after.
  • Gradient images and the bullet points were generated using the first tool I found on the net.  Although it’s basic, many of the others are worse…
  • Rounded corners were achieved using Rick Strahl’s CSS3 code.  It doesn’t work in IE7.  I don’t care.
  • Tim Heuer’s original Origami skin was an invaluable resource for figuring out “just how do you achieve that”.
  • I don’t have a good reference for the alpha blending, so I’ll write that up separately, but it’s another CSS3 feature.
  • The menu bar is pure CSS, with a little help from some 1px wide gradient images.
  • I don’t think I’d have ever got live comment preview working without this post.  Which is ironic, because it’s obvious now.
  • Lea Alcantara‘s recent article for a list apart was fascinating.  I have to admit, I tried Trebuchet for the main text, but went back to Verdana.
  • Highlights are often achieved using CSS text-shadow, which has the advantage over changing the font that it doesn’t affect layout.
  • Gareth Slinn solved a particularly nasty and portable bug in dealing with comments.  If you want to know about accessibility, he wrote the book.

Thanks should also go to the Finn Brothers and Thelonious Monk, without whom I seriously doubt I’d have stuck at it.

There’s still plenty of bugs and rough edges, but I thought it about time I released:

  • It doesn’t look quite right in IE7.  I know, I should spend more time with it.
  • The code sections should be completely opaque, but they’re not, which is impeding readability.
  • The RSS icon doesn’t blend into the background.  I suspect that I’ll just live with that: it’s that or learn Photoshop.

All in all, Subtext skinning is an amazing demonstration of just what is possible with ASP.NET data binding if you’re committed.  I’d still prefer a pull model where the skin was given an API for rendering the page, though.  Certain things, such as customizing the date format, are pretty much impossible given the current code base, which is a pity.  I imagine Phil has this planned for version 3…

However, the more time I spend with Subtext, the more impressed I am.  It makes writing a blog engine look easy, and the more you learn about it, the more you realize it’s really not.  There are far more pages in Subtext than you might think at first.  I may still have missed one…  or two…

Anyway, I hope you like the new look.  I’ll get away from meta and back to content tomorrow, I promise.


NHibernate Code Quality Measures

I’ve already talked about Patrick’s measures of code quality.  His approach is to take a very direct mathematical and analytical approach to it.  I appreciate the sophistication of the tools he’s using, but I think that the results of this kind of analysis need to be treated with the same level of suspicion as an analyst treats a set of accounts.  e.g. Cash flow is hard to fake, but why are the accruals so low?

Ayende, on the other hand, argues for a very different approach.  He lists as his measures of code quality:

  • Peer code review (especially for things that are either tricky or complex)
  • Reported bugs
  • Ease of change over time
  • Quality of deliverable product

Let’s be frank, only one of these is actually a metric: reported bugs.  The others are a bit confused.

Peer Code Review

Let’s talk about peer code review.  Peer code review is absolutely best practice; it’s a cracking and important way of improving code quality.  But it isn’t a measure, it’s a practice.  Now, there are measurable things that we can ask about the reviews:

  • How often are code reviews carried out with developers in the same room?
  • How often do code reviews identify a bug?
  • How often do code reviews result in a change being rethought?

Even after that, you’d need to be very cautious about what you do with the metrics: a 0% defect rate and a 100% rate would probably both indicate code quality problems.

Code Quality Outputs

Two of the other “measures” that Oren refers to are actually the goals we’re trying to achieve: ease of change over time and the quality of the delivered product.  Now, ultimately these are all we actually care about.  Code quality itself is merely a means to an end: getting things done.  But again, we have two problems: there are no actual measures here, and we’re not actually looking at the code quality itself.  Let’s remind ourselves of Martin Fowler’s dictum: you can’t measure productivity.  Now, with an open source project, we have some advantages over the classic corporate development scenario: mostly that people are less likely to game the system.  However, there are ways of achieving these aims that don’t involve improving the quality of the code base: I’ve improved the project velocity and quality of legacy systems by spending a lot of time developing test suites for them.

Both of them suffer from the serious question “relative to what?”.  It’s taken NHibernate 2.1 two years to hit GA.  Does that mean the code quality is bad?  No, probably not.  There are any number of possible reasons for this: the changes were large and complex, the contributors couldn’t devote much time to it, sticking with 2.0 slowed things up, the project administrators didn’t regard going GA as a priority.  I’m sure you could think of more without much trouble.

You can’t measure project velocity “in general”, because it depends what you want to do to the codebase.  The release notes testify to a phenomenal number of improvements, but the criteria API still isn’t LINQ-complete.  Why is this?  Well, it’s because the criteria API was developed well before LINQ was on anyone’s radar.  With the best will in the world, sometimes you end up with the wrong abstraction.

Reported Bugs

Now here, we have something we can measure.  I’ve already spoken about the dangers of doing so, so take that as read.  The problem is, a quick look at JIRA doesn’t really seem to tell a good story.  More than 10% of all logged issues are open.  The vast majority of those are unassigned.

Some of these are really arguable.  The two most popular open issues are a problem with Oracle stored procedures and a desire for the lazy loading of columns.  Both can be filed under the category “working with legacy databases”.  Do I care?  No.  Do the people voting on the issues care?  I’d say yes.  Have they submitted a patch?  No.  But then, submitting a patch would require you to be up to speed with the code base, which is where this whole discussion began.

So what is NHibernate’s Code Quality, then?

Frankly, I don’t know.  A naive scan of the code reveals a code base structured according to good engineering practice, but that won’t reveal indirect and subtle dependencies.  Ayende isn’t worried about the direct dependencies that do appear in the system, but it’s quite rare to meet a maintainer worried about this before he’s advocating a wholesale rewrite.

NHibernate, as a live port of a Java project, is special.  Code quality, as with everything else, is context sensitive.  However, while I’m not convinced Patrick’s metrics necessarily tell the story they initially appear to, I’m still not convinced NHibernate has any measures of quality.  That’s not the end of the world: you don’t necessarily need a process for something that isn’t broken.  I’d keep an eye on JIRA, though.

Next, I want to talk about what readability and maintainability actually are.  And I don’t want to talk about metrics.

Common Singleton Patterns

Another thing that we covered at my recent presentation was common use cases of Singletons.  The problem with artificial examples is that you’re often presented with objections that the point you’re making is restricted to the particular example you’ve given.  So, instead, let’s take a look at some of the most common usages of singleton patterns and explain why using them is breaking your code:

The Current User

I’ve lost count of the number of times I’ve seen the current logged-in user accessed through a static method.  It’s extremely tempting: there’s a phenomenal number of parts of your system that rely on knowing it, and you don’t really want to keep passing the user around everywhere you go.

The problems start when you look at the implementation.  Chances are, if you’re writing a web application, you’re pulling that information out of the HttpContext (or worse, using a component that you don’t know uses the HttpContext).  The first time you try to add in some batch processing in a console application, you’re going to discover one of your main dependencies just fell away and you don’t have a backup.  You could try simulating HttpContext.Current, and you might even succeed (I doubt you’ll enjoy it) but you’ll now have a console app that pretends to be a web application just to support some code that you’re already starting to think of as legacy.

However, even if you could deal with this, you’ve got other problems.  What happens if you wanted to be able to impersonate another user?  It might not sound like much of a concern right now, but wait till you’re supporting the application and want to see what the other guy can see.  Sure, you could deal with this by hacking around with the static methods, but it would have been a whole lot easier if you’d just passed the correct routines what user you wanted.

Another concern: sometimes a workflow has to go through multiple users, an approval process is an obvious example.  Do you really want user A to ask user B to do something and then discover user B doesn’t have the permission?  Much better to be able to pass a user other than the current user into the permission logic and let user A be able to see who exactly can help him.  All of a sudden, your permission system doesn’t rely on the current user at all.

You might be thinking that all of this could be avoided by designing all of these features in at the start, but you’re going heavily into waterfall thinking there, a methodology that has been comprehensively found wanting.  Better is to start with a flexible design that allows you to change behaviour when you need to, and for that you need to be passing around instances, not calling static methods.
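Here’s a sketch of what passing the user around looks like (the User and ApprovalPermissions types are my invention, not from any real system).  Because the user is a parameter, asking “could user B approve this?” needs no impersonation trickery at all:

```csharp
public class User
{
    public string Name { get; private set; }
    public bool IsApprover { get; private set; }

    public User(string name, bool isApprover)
    {
        Name = name;
        IsApprover = isApprover;
    }
}

public class ApprovalPermissions
{
    // The user is an argument, not a static CurrentUser, so user A can check
    // user B's permissions before routing the work to them.
    public bool CanApprove(User user)
    {
        return user.IsApprover;
    }
}
```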

The Current Time

If you’ve got DateTime.Now in your code, there’s a good chance you’ve got a bug nearby.  Think about what happens when Daylight Savings Time kicks in.  However, even if you’ve changed your code to read DateTime.UtcNow, you’re still going to have all sorts of problems.  Here’s a concrete example: I have an order processing system with a batch job for Day End.  It uses the current time all over the place, including for the truly basic task of working out what day it needs to be processing.

So, what if I want to run the batch on specific data?  On a day other than today?  Well, I’m going to have to change the system clock.  You thought it was bad trying to fake an HttpContext; faking the current time is much, much worse.  You’re actually messing with the BIOS.  All because you didn’t write the following code:

public interface IDateProvider {
    DateTime Now { get; }
}

You can go significantly further down this road: I have unit tests in one project of mine that explicitly tests whether or not the code behaves correctly in Jersey City.  I wouldn’t be able to do that if I hadn’t abstracted out the concept of time from the system.
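In case it isn’t obvious how the interface gets used, here’s a sketch with two implementations: the real clock for production, and a fixed clock so a Day End batch can run “on” any date without touching the machine.  The implementation names are mine, not from the original system:

```csharp
using System;

public interface IDateProvider
{
    DateTime Now { get; }
}

// Production: delegate to the real clock.
public class SystemDateProvider : IDateProvider
{
    public DateTime Now { get { return DateTime.UtcNow; } }
}

// Tests and reruns: the "current time" is whatever you injected.
public class FixedDateProvider : IDateProvider
{
    private readonly DateTime _now;
    public FixedDateProvider(DateTime now) { _now = now; }
    public DateTime Now { get { return _now; } }
}
```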

Logging

It’s amazing how many people approach logging through a singleton approach.  If you’re using a framework, you might not care about this:  there are people who’ve pretty much written the last word on logging and all you need to do is write a configuration file.  The very completeness of the solutions can lead you to thinking that static methods are the right solution for the problem.  Actually, they’re not.  The completeness of the solution has merely minimized the damage.

In our firm, we’ve got a set of static methods used to write to the event log.  Now, unlike something like log4net, this isn’t a massively configurable and complete solution, it’s just what the developers at the time wrote.  So, you can’t affect policy: you can’t, for instance, filter out logging at any level without changing the logging library itself.  You can’t disable logging if you’re running tests.  In practice, you can’t do very much at all.  Contrast this with the solution in one of our FIX feeds.  It uses log4net, but more than that, it uses Castle’s logging facility.  Here, we get a logging object passed into the constructor of the class that needs logging.  This logger is specific to the class that is using it: try implementing that using static methods.  (You can: you just need to either pass the caller into every last call or walk the stack trace.  Neither solution is desirable.)
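A minimal sketch of the constructor-injected style.  The ILogger interface here is a stand-in for Castle’s logging facility interface, and the class names are invented; in production the container would supply a log4net-backed logger named after the consuming class:

```csharp
using System.Collections.Generic;

public interface ILogger
{
    void Warn(string message);
}

// A trivial test double: it just remembers what was logged.
public class ListLogger : ILogger
{
    public readonly List<string> Messages = new List<string>();
    public void Warn(string message) { Messages.Add(message); }
}

public class FeedReader
{
    private readonly ILogger _logger;

    // The logger arrives through the constructor, so tests can swap it out
    // and logging policy lives in the composition root, not in static state.
    public FeedReader(ILogger logger) { _logger = logger; }

    public void Process(string message)
    {
        if (string.IsNullOrEmpty(message))
            _logger.Warn("Empty message received");
    }
}
```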

Again, the singleton pattern actually makes things worse, not better, for logging.

Configuration

Anyone who’s dealt with .NET’s config files for a while will have come across some standard problems:

  • The typeless nature of AppSettings is a pain in the neck.
  • Access to configuration settings can come from anywhere.
  • You’ve often got to include multiple different settings for the same value, to support different assemblies’ interpretations.

The second two are because ConfigurationManager is a singleton.

Let’s see how this happens: ConfigurationManager exposes a public static property, AppSettings.  Anyone can use AppSettings however they like.  And guess what?  They do.  Now, here’s a common approach to dealing with this problem:

  • Only one class can access AppSettings.
  • Often, this class is a static class, so only has static accessors.
  • Any conversion to correct types is handled by this class.

Now, this is much better, and addresses the previous problems, but there are still some problems left.  The first comes directly from the singleton nature of the solution.  If you’ve got a single database connection, it might not seem like that big a problem, but the day you have two instances and want to load data from one to the other, you’ll discover that having your data access hardwired to a particular configuration setting is causing you problems.  This is because you’re still breaking the Single Responsibility Principle: your data access class shouldn’t be making decisions about how to handle configuration, and as far as dependency management is concerned, accessing a static method on a configuration class is no different from reading the configuration setting directly.

Another problem is that having a “configuration” class doesn’t really scale: the more assemblies you have, the less likely they are to want to share their configuration settings.  Now, you can always just simply have separate configuration settings for different concerns.  This is actually a good idea, but it still doesn’t deal with what happens when you actually wanted to share configuration settings.

Truth is, most classes, and even most assemblies, shouldn’t need to know about configuration settings at all.  Configuration isn’t like logging: logging decisions have to be made in the class doing the work; configuration decisions don’t.  Why not just pass the connection string into the constructor?  Same with the mail server, same with the retry policy.  Now, the only place that needs to read configuration is the Main method.
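Sketched out (class and setting names invented), the repository never sees ConfigurationManager at all, which also means two differently configured instances are trivial:

```csharp
// The repository takes a plain string; it neither knows nor cares where the
// value came from.
public class CustomerRepository
{
    private readonly string _connectionString;

    public CustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string ConnectionString { get { return _connectionString; } }
}

public static class CompositionRoot
{
    // In a real application, Main would read ConfigurationManager.AppSettings
    // once, here, and nowhere else.
    public static CustomerRepository Build(string connectionString)
    {
        return new CustomerRepository(connectionString);
    }
}
```

The day you need to copy data from one database to another, you just build two repositories with two different strings.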

It’s all Microsoft’s Fault

Bill actually has a lot to answer for on this.  In each case we’ve dealt with, the design of the .NET API has led us into long-term problems for a bit of short-term gain.  Developers often recognize that they’ve got a problem with HttpContext, the current time or configuration soup, but they don’t know what to do about it.  The .NET framework has lots of these somewhat curious decisions: an API is developed for the lowest common denominator, with the understanding that more sophisticated developers will code around it.  At least it’s usually possible to do so.  Static methods, where they have to be used, can usually be wrapped in instance methods with relatively few ill effects.  In the first case, a simple IIdentity interface which returns the name of the currently logged on user can hide an awful lot of HttpContext-related problems.  But you do need to understand that this coupling happens, that it’s damaging and how you can avoid it.

To re-iterate, singleton patterns are dangerous, even those that Microsoft have implemented.

Microsoft’s Provider Pattern: A Bad Idea Carried Out to Perfection

Sun has always felt a need to educate its developers.  Sometimes this has led to Pearl Harbour scale disasters like J2EE, but it has also produced an extremely technically literate community.  On the other hand, sometimes you wish Microsoft didn’t even try.  I’ve pretty much come to the conclusion, for instance, that the Microsoft Enterprise Library is the Wrong Thing.  Every so often, we come across stuff which falls under the heading of “mistakes you need a PhD to make”, as Francis Fukuyama describes his advocacy of the invasion of Iraq.  The provider pattern is top of my list here.  If you’re not familiar with it, it’s a Microsoft-specific form of pluggable singleton.  It’s a singleton by virtue of the “default provider” mechanism.  It’s extremely over-complex and, in my experience, just plain doesn’t deliver any benefits that plain old constructors wouldn’t achieve better.

By combining the singleton pattern with a pluggable architecture, they hoped to draw the poison from the pattern.  Anyone who’s used it will know this isn’t the case. 

  • Sometimes the pluggability just plain fails: try finding the parent node from a plugged sub-sitemap.
  • Sometimes its insistence on using concrete types for everything makes your code nigh-on impossible to implement (especially if some third party made the same decision…)
  • Since it’s a singleton and hence can have shared state, you need to be writing thread-safe code.  Not a trivial task for a neophyte developer who just wanted a bit of pluggability.
  • Since it doesn’t have a coherent dependency injection model, you often end up using the Microsoft configuration model to get anything done.  (You do get a set of string name/value pairs, but any complex dependencies will fall down badly.)
  • Worst, when you finally discover that you actually wanted two of something, you get reminded that the provider pattern remains a special case of the singleton pattern.

It is in many ways really impressive, but that’s what makes it especially pernicious: it picks up a lot of developers who are trying to improve and leads them down blind alleys.  You can spend a lot of time supporting a provider pattern.  When you start to figure out that it’s not really paying back the investment, you’re going to feel that much of this patterns stuff is just nonsense.  Tell me you don’t know a developer like that…

Ironically, you know one really obvious user of the provider pattern?  Subtext, the blogging engine that powers, um, this site…

More about Singleton and Constructor Injection

So, I gave the first of my talks about Design Patterns last week.  I concentrated my attention on the Command and Singleton patterns.  My colleagues weren’t particularly interested in the Command pattern, but my remarks on the Singleton pattern raised a lot of interest.  For many, it was the first time they’d really heard someone come out and say that static methods were a bad idea.  It was ironic that Max Pool was blogging about the uselessness of evangelism on modern programming techniques while I was having a positive experience doing exactly that.

The thing is, everyone is used to a certain way of doing things.  They know that using static methods and shared state saves time.  They know that they always end up with dependency soup, but it’s usually thought that this is just what programs are like.  To a certain extent, it’s always going to be hard to eliminate externalities, but it’s a lot easier than most people expect.

Constructor Injection, equally, is really easy to explain: you just pass things into the constructor.  Developers who write a lot of tests can instantly see the advantages of doing things that way.

None of this makes it easy to be the guy in the room saying the exact opposite of what most people expect, but it’s very rewarding when it comes off.

Here’s some talking points:

  • Business requirements change, they usually change in a way you’re not expecting.
  • If you only need one instance, creating it in the Main method and passing it into the objects that need it is much more flexible than using a Singleton pattern.
  • If something is public, it will get used.  You can’t create a static method and then say that people shouldn’t use it.  They will, and it’ll be your fault.
  • Constructor Injection is a very low cost thing to implement when you’re writing new code. 
  • Refactoring old code to use it is much harder, but that reflects the refactoring challenges inherent in Singleton-style code.
  • If you’re passing a lot of objects down a function chain, that’s a code smell.  Chances are that the “group of objects” is a good candidate for a class.  Once you understand what that class is actually called, you’re on your way to a better design.
  • Passing lots of objects in constructor chains isn’t as easy to deal with.  Dependency injection containers make this problem manageable.  (Amongst other things…)
  • Evaluating all of your dependencies up front can lead to problems with circular dependencies.  Usually the best way to deal with this is to redesign objects so that they don’t have circular dependencies, but property injection can help in ugly cases.
  • Ironically, developers often spend a lot of time trying to think of the best way to make object interact, how to load configuration settings and so on.  Constructor Injection makes this simple: it’s always in the constructor.
  • Constructor Injection isn’t quite the end of the story.  If you actually need more than one object, we need to start talking about abstract factories.
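The second bullet in code form (all names invented): the “only one instance” lives in Main, and nothing stops you creating a second one when the requirements change:

```csharp
public class AuditLog
{
    public int Entries { get; private set; }
    public void Record(string what) { Entries++; }
}

public class AccountService
{
    private readonly AuditLog _log;

    // The single instance arrives via the constructor; no static state anywhere.
    public AccountService(AuditLog log) { _log = log; }

    public void Close(string account) { _log.Record("closed " + account); }
}

public static class Program
{
    public static AuditLog Run()
    {
        var log = new AuditLog();               // created once, in one place
        var service = new AccountService(log);  // and passed to whoever needs it
        service.Close("12345");
        return log;
    }
}
```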

The sooner you start using constructor injection, the sooner refactoring your code will stop feeling like playing Jenga.

Technorati Tags: Singleton,Inversion of Control,Abstract Factory

A Brilliant Diagnostic in Castle Windsor

I’m not massively fond of Castle’s diagnostics.  There’s certainly no general framework such as StructureMap has: you just ask for something and wait for the inevitable exceptions.  However, the guy who wrote this bit of code for PerWebRequestLifestyle will be bought a drink if I ever meet him:

[screenshot of the diagnostic exception message]

It’s amazing how often people will write FAQs explaining obscure error messages when sticking the diagnostics directly into the code would be more convenient, both for them, and the users.

Technorati Tags: Diagnostics,Castle Windsor