Industry Worst Practice

I talked last time about why I’m suspicious of standardized practices.  The fundamental objection is that they lack context.  However, sometimes someone manages to come up with an idea that’s useful in no context whatsoever.  We’re talking anti-pattern bad.  The following story is true.  The names have been changed to protect the guilty.

GUI Design Methodology

John Developer joined a software consultancy.  They seemed pretty cool, had some interesting projects in the pipeline and used tech he’d got experience in.  Being pretty much a one-man hacker at his previous firm, he wanted to improve his skills and work on bigger projects.  So it was something of a shock to discover that he was being referred to as “The Guru” after his first week.

Now, having stared at the code for a while, he couldn’t understand what code like the following was for:

Const f_S017_c = 17

Yes, this was VB3.  That is absolutely no excuse for bad design.  He couldn’t understand what all of the switch statements were for, either.  It was then that, for possibly the first time in his life, he decided to read the documentation.  The documentation was pretty standard at first: you’d see a screen shot of what the UI was meant to look like.  There’d then be a quick explanation of what each control on the form did, along with the standard redundant explanations of what “OK” and “Cancel” did.

The next page was a spider’s web of a flow chart.  He asked his manager what it was: “Ah, that’s the state machine.”

Flow Charts as Standard Practice

Here’s how you were meant to design a screen:

  • Mock up what the screen should look like (an old RAD design practice).
  • Identify the modes and transitions the screen should have (e.g. which tab is showing, whether you’re adding or modifying data).
  • Build a flow chart that demonstrates how the screen works.
  • Implement the flow chart.

Yeah, you remember flow charts.  You probably saw some when you first learned to program.  Do you remember how to do a subroutine in them?  Of course you don’t because you can’t.  Flow charts and state machines are goto writ large.  And as soon as you gain any complexity, your state machine gets huge.  Imagine a screen with three tabs.  Imagine it can be used for adding or updating data.  That’s six states right there.  Imagine if one field is sometimes disabled.  That’s twelve.  State is a good concept and appropriate in many circumstances, but orthogonality is important.  There’s a reason classes are allowed multiple fields.
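To make the orthogonality point concrete, here’s a rough C# sketch (the types and names are invented for illustration): modelling each independent concern as its own field keeps three tabs, two edit modes and a disabled flag down to three facts rather than twelve states.

// Illustrative only: the same screen modelled two ways.

// The state machine way: every combination becomes its own state.
enum ScreenState {
    Tab1Adding, Tab1Updating,
    Tab2Adding, Tab2Updating,
    Tab3Adding, Tab3Updating
    // ...and double the list again the moment one field can be disabled.
}

// The orthogonal way: independent concerns live in independent fields.
class ScreenModel {
    public int ActiveTab { get; set; }         // 1..3
    public bool IsAdding { get; set; }         // false = updating existing data
    public bool IsFieldEnabled { get; set; }
}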

Step 3 was where the true brain damage occurred, but the implementation, while not quite as bad, was hilariously funny.  Bear in mind that this is all implemented on top of VB3, an environment in which event handling was the dominant idiom.  Each item in the flow chart was numbered.  Now, it’s bad to have magic numbers in code, so constants were used.  So, if you take a look at our code above, f_S017_c can be read as “field representing state 17 which is a constant”.  I hope that’s clear now…

Then, of course, when you actually need to branch, you need one of those huge switch statements.  Of course, you only put in the states which are possible.  Unfortunately, John quickly discovered that impossible states ran through the switch statements with alarming regularity.  These resulted in exactly the sort of code fragility you’d expect if you decide to ignore pretty much every serious thinker since 1968.
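For anyone who hasn’t had the pleasure, here’s a hypothetical reconstruction in C# (the original was VB3, and the class, states and handler are all invented):

class AccountScreen {
    // "field representing state 17 which is a constant", straight off the flow chart
    const int f_S017_c = 17;
    const int f_S018_c = 18;

    int currentState = f_S017_c;

    public void OnSaveClicked() {
        switch (currentState) {
            case f_S017_c:
                // ...whatever flow chart box 17 says happens on save...
                currentState = f_S018_c;
                break;
            case f_S018_c:
                // ...box 18...
                break;
            // No default branch: "impossible" states fall straight through,
            // which is exactly what kept happening in practice.
        }
    }
}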

It wasn’t just the screens either.  Entities were designed in the exact same way.

Burn It

There are uses for flow charts, but in general terms I’d use them in informal, not formal, contexts.  There are definitely uses for the goto statement (as I seem to have to explain every time I use one).  What made this particular practice exceptional was its cargo-cult ability to impose out-of-context ideas from Software Engineering 101 and ruin developers’ lives.  But what’s really interesting about it is the way it was exactly the same as other standardized practices.

  • It seemed perfectly sensible and plausible to non-technical people.  (It used all the right words.)
  • It was inconsistently applied.  (Because sometimes you couldn’t get work done with it.)
  • Despite the phenomenal amount of documentation produced, it never exactly corresponded to the actual code.  (The only people who knew this, of course, were the coders, and they knew better than to mention it.)
  • When a production bug was found, it inevitably turned out that some deviation from the system had occurred.  That violation, and the violator, got the blame.
  • Even given all of that, quite a lot of people wouldn’t hear a word said in favour of deviating from the process.

Luckily, this story has a happy ending.  Six months after this, with some fairly patient and low-key lobbying, John got asked to a meeting to discuss what they should do with their development methodology.  He said “Burn it and bury the ashes”.

They took his advice; he got a raise.  He hadn’t actually solved the problem of Big Upfront Design, but it was a start…


Industry Best Practice

You may have noticed my posting’s been a touch light recently.  There’s an embarrassingly good reason for this: my computer blew up during the snow.  For one reason and another, it has taken me an absolute age to replace it.  Instead, I’ve been using my wife’s old laptop, which isn’t really up to serious development.  On the other hand, it’s made me a voracious reader.  In particular, I finally read Paul Graham’s love letter to Lisp, which contains a passage that I think sums up why discussions of standardization always leave me feeling uncomfortable.*

I believe this term [industry best practice] was originally used to describe accounting methods and so on. What it means, roughly, is don’t do anything weird.

The problem with gut reactions, of course, is that they’re hard to explain, but I think Paul nailed it right there: I’m not an accountant, and I’ve no interest in behaving like one.  There are also good rational reasons for rejecting standardized practices as limiting.  None of this is going to help me persuade my auditors to throw away their checklists, but I’d rather have happy developers than an ISO 9001 certification.  And believe it or not, I had one of the latter once…

Anyway, the computer arrives tonight, just in time for Lucid Lynx.  It’s downloading as we speak.  🙂


Decorators, Yadic and Castle Windsor

You’ve probably never heard of Yadic.  It’s the smallest DI container I’ve seen.  I’ve linked directly to the only source file because it’s a good read, even if Google Code can’t do syntax colouring for F#.  It’s more opinionated than a party political conference and has some neat ideas.  One of them is the concept that you can specify that a service be decorated.

Let’s just run through that again: you can take a service and stick a decorator on it.  Now, you may be thinking that you can already do this in most containers, and you can, but it’s slightly painful.  You need to rename the original instance, register the new instance under the old name and then set up an explicit dependency for the constructor parameter.  It gets worse if the original registration didn’t have a name.  (Yadic, of course, doesn’t allow you to have multiple implementations of the one service.  I did say it was opinionated…)
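For comparison, here’s roughly what that dance looks like in a conventional container.  This is a sketch from memory of Castle Windsor’s fluent registration API; the types, the component name and the “inner” constructor parameter are all invented for illustration.

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IRepository { string Load(int id); }

public class SqlRepository : IRepository {
    public string Load(int id) { return "from the database"; }
}

public class CachingRepository : IRepository {
    private readonly IRepository inner;
    public CachingRepository(IRepository inner) { this.inner = inner; }
    public string Load(int id) { return inner.Load(id); }  // caching elided
}

public class Registration {
    public static IWindsorContainer Build() {
        var container = new WindsorContainer();
        // To decorate SqlRepository you register the decorator as the service,
        // give the original a name, and wire the constructor parameter up explicitly.
        container.Register(
            Component.For<IRepository>()
                     .ImplementedBy<CachingRepository>()
                     .ServiceOverrides(ServiceOverride.ForKey("inner").Eq("realRepository")),
            Component.For<IRepository>()
                     .ImplementedBy<SqlRepository>()
                     .Named("realRepository"));
        return container;
    }
}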

Castle Windsor, the container I know best, already has this feature, but it doesn’t know it.  The interceptors feature provides the same functionality, but ties it to the use of DynamicProxy.  As a consequence, it tends to be seen as an extremely technical aspect of the implementation.  It exposes the “How”, but not the “What” or the “Why”.
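To see what I mean about exposing the “How”, a decorator written as an interceptor looks something like this sketch (the logging behaviour is just an invented example, and the namespace has moved around between Castle versions):

using System;
using Castle.DynamicProxy;   // Castle.Core.Interceptor in older releases

// Functionally a decorator, but expressed as call interception rather than as
// a class that implements the service interface.
public class LoggingInterceptor : IInterceptor {
    public void Intercept(IInvocation invocation) {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed();   // call through to the underlying component
        Console.WriteLine("Called " + invocation.Method.Name);
    }
}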

It also demonstrates something quite surprising: there’s still room for innovation in the IoC container space.  Containers haven’t changed much in the last five years, but they’re much less of a solved problem than it might appear and there’s still a need for revolutionary thinking.  Obviously, I’ve got form on this, but I don’t think the container of 2020 will look much like the container of 2010.  If any of us are still using statically typed languages, that is…

Postscript: I’ve been telling Christian to blog for some time, but to date he’s only made one post.  Mark Needham has tried a new approach to getting his thinking to a wider audience.  It’s got merits… 🙂


Rob Eisenberg’s Co-routine Trick in Retlang

You really owe it to yourself to watch Rob Eisenberg’s amazing MVVM talk and download the source code.  There are so many neat things in this talk it’s hard to know where to start.  Basically, he’s written a short, understandable piece of code that shows how to develop Silverlight applications in a manner with which an ASP.NET MVC developer like myself can be comfortable.  I really liked the “co-routine trick”, so I thought I would write something explaining it in greater depth.

The basic trick is to use yield return’s “co-routine like” execution to form an IEnumerable sequence of what code to run on which thread.  Each action, when it completes, sets up the execution of the next action.  This is a bit disguised in the rest of the talk’s code, so I’m ripping it out to show it directly.  It’s also clearer with a consistent threading model, so I use Retlang fibers to follow the thread of execution.  (You’d get other benefits as well, but this post is long enough as it is.)

First off, I’m going to outline the basic form of the trick, as close to Rob’s original as I can manage.  Then there’s a short program that uses it.  Finally, I’ll show you an alternative version that’s arguably even more elegant.

You’re Going To Like It

So, what does Rob’s IResult look like in Retlang?

public interface IResult {
    void Execute();
    IDisposingExecutor Fiber { get; }
}

So you’ve got some code and a “thread” to run it on.  The example code lower down has the obvious dumb implementation, rather than the Command/Query framework that’s in Rob’s code.  Then the co-routine trick looks like this:

public static class CoroutineTrick {
    public static void Execute(this IEnumerable<IResult> results) {
        var enumerator = results.GetEnumerator();
        Action nextAction = null;
        nextAction = () => {
            if (!enumerator.MoveNext()) {
                return;
            }
            var result = enumerator.Current;
            result.Fiber.Enqueue(() => {
              result.Execute();
              nextAction();
            });
        };
        nextAction();
    }
}

The assignment of nextAction to null is a compiler dodge which I recommend ignoring.  Each call to nextAction advances the iterator* and then enqueues the action on the correct fiber.  Then nextAction is called again on the same fiber as the previous action.  (Rob achieves this effect in the ResultEnumerator class.)

*It helps to remember that MoveNext needs to be called before accessing the first element.

The Demo

Here’s the rest of the code.  It shows a form, moves the progress bar, does a couple of Thread.Sleeps and exits.  As always, note that the UI is responsive the whole time.

class FormProgressReporter : Form {
    private readonly ProgressBar progressBar;

    public FormProgressReporter() {
        progressBar = new ProgressBar();
        SuspendLayout();
        progressBar.Dock = DockStyle.Fill;
        progressBar.Location = new System.Drawing.Point(0, 0);
        progressBar.Name = "progressBar";
        progressBar.Size = new System.Drawing.Size(292, 266);
        progressBar.TabIndex = 0;
        Controls.Add(progressBar);
        Height = 75;
        Width = 600;
        ResumeLayout();
    }

    public void Report(int nodesProcessed, int nodesEncountered) {
        Text = string.Format("{0}/{1}", nodesProcessed, nodesEncountered);
        SuspendLayout();
        progressBar.Maximum = nodesEncountered;
        progressBar.Value = nodesProcessed;
        ResumeLayout();
    }

    protected override void OnShown(EventArgs e) {
        base.OnShown(e);
        Actions().Execute();
    }

    IEnumerable<IResult> Actions() {
        yield return Update(10, 100);
        yield return Calculate(() => Thread.Sleep(2000));
        var nodes = 20;
        yield return Update(nodes, 100);
        yield return Calculate(() => {
                                       nodes = 75;
                                       Thread.Sleep(2000);
                                   });
        yield return Update(nodes, 100);
        yield return Calculate(() => {
            nodes = 100;
            Thread.Sleep(2000);
        });
        yield return Update(nodes, 100);
        yield return Calculate(() => Thread.Sleep(1000));
        yield return new Result(() => Hide(), Program.ui);
    }

    IResult Calculate(Action action) {
        return new Result(action, Program.worker);
    }

    IResult Update(int nodesProcessed, int nodesEncountered) {
        return new Result(() => this.Report(nodesProcessed, nodesEncountered), Program.ui);
    }
}

public static class Program
{
    internal static IFiber ui = null;
    internal static IFiber worker = new PoolFiber();

    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        var form = new FormProgressReporter();
        ui = new FormFiber(form, new BatchAndSingleExecutor());

        ui.Start();
        worker.Start();
        form.ShowDialog();
    }
}

class Result : IResult {
    private readonly Action action;
    private readonly IDisposingExecutor fiber;

    public Result(Action action, IDisposingExecutor fiber) {
        this.action = action;
        this.fiber = fiber;
    }

    public void Execute() {
        action();
    }

    public IDisposingExecutor Fiber {
        get { return fiber; }
    }
}

You’ll notice that I’m quite dependent upon static variables, which I hate.  However, the original code also relies on this at the moment, and I wanted to produce something that looked as similar as possible, within the constraints of including the entire thing in a single blog post.  The static dependency is fixable, although you’ll have to lose Rob’s implementation of AsResult via extension methods.

Using Sensible Defaults

The code above is a quick hack to demonstrate the idea, but once you understand it there’s even more that can be done.  You could, for instance, modify the code to always execute the MoveNext on a specific fiber.  This could actually be quite elegant: you could always assume code between “yield return”s runs on the UI thread (or never does).  Here’s how that could look:

public static void Execute(this IEnumerable<Action> results,
                           IDisposingExecutor worker, IDisposingExecutor ui) {
    var enumerator = results.GetEnumerator();
    Action nextAction = null;
    nextAction = () => {
        if (enumerator.MoveNext()) {
            var command = enumerator.Current;
            worker.Enqueue(() => {
                command();
                ui.Enqueue(nextAction);
            });
        }
    };
    ui.Enqueue(nextAction);
}

At the loss of some flexibility, we’ve now got something that runs yield return actions on worker threads, but everything else on the UI thread.  (A single UI thread is a reasonable assumption most of the time.)  The actions code would then read:

IEnumerable<Action> Actions2() {
    Report(10, 100);
    yield return () => Thread.Sleep(2000);
    var nodes = 20;
    Report(nodes, 100);
    yield return () => {
        nodes = 75;
        Thread.Sleep(2000);
    };
    Report(nodes, 100);
    yield return () => {
        nodes = 100;
        Thread.Sleep(2000);
    };
    Report(nodes, 100);
    yield return () => Thread.Sleep(1000);
    Hide();
}

This model looks even more like we’re using continuations, and we no longer need specialized types.  We might still want them: Rob’s framework mixes this in with its command/query infrastructure.  The beauty of this is: you don’t have to choose.  There’s nothing stopping you supporting both solutions within a framework.

Happy Easter.  🙂


The Law of Demeter: Context Matters

It’s funny: everyone knows the Law of Demeter, but everyone still seems to think it’s about dots.  There seem to be a million definitions of it, but here’s mine:

Don’t pass in the wrong object.

Laying down the law

Now, Derick’s saying extension methods don’t count.  I’d go further: they don’t matter at all.  The Law of Demeter has nothing to do with how you get an object, just whether or not you started with the right object.  Derick shows some code that is described as making Demeter scream.  (It’s a projection from a variable “assets” to “asset”.)  He shows a version of the code that looks completely different and argues that if that code is okay, so is the original code.  That’s completely correct, but neither version tells us anything about whether “assets” was the right parameter to the function in the first place.  Here are my mental rules of thumb:

  • Is “assets” used anywhere in the function other than to generate “asset”?
  • If not, is the principal purpose of the function to return “asset”?

If the answer to the first question is yes, then the function properly depends upon assets and Demeter is happy.  Equally, if the function is principally a projection or an overload, there’s not a problem: assets is still the correct subject and asset is the result.  It’s when you answer “no” to both questions that you’ve got a problem.  In other words, you’ve violated SRP: the function now projects “assets” to “asset” and then does something else with asset.
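An invented example to make the rules of thumb concrete:

using System.Collections.Generic;
using System.Linq;

public class Asset {
    public decimal PurchasePrice { get; set; }
    public decimal DepreciationRate { get; set; }
}

public class DepreciationCalculator {
    // Answers "no" to both questions: assets is only ever used to find asset,
    // and the real purpose of the method is the calculation.  Wrong parameter.
    public decimal DepreciationOfFirst(IList<Asset> assets) {
        var asset = assets.First();
        return asset.PurchasePrice * asset.DepreciationRate;
    }

    // Demeter (and SRP) are happy: the caller does the projection, and this
    // method depends on exactly the object it needs.
    public decimal Depreciation(Asset asset) {
        return asset.PurchasePrice * asset.DepreciationRate;
    }
}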

If you think of it from a testing perspective, you can apply a different measure.  If you were writing tests for this code, would you start to get annoyed at having to construct the list of assets each time?  If most of your tests had only one item in the list, that’s a bad sign.  So, in answer to Derick’s basic question (does the code violate LoD?) I’d have to answer: I don’t know, I need to see the rest of the code.

More About Python and the Interface Segregation Principle

One of the joys of blogging is that you occasionally discover people thoughtfully and politely reducing your arguments to shreds.  I recently came across an article by William Caputo on the subject of my discussion with Ryan back in November.*  I’ll try to summarize the original discussion:

  • Ryan contended that using Python fundamentally changed the principles of OOP.
  • I argued that the SOLID principles still held.

Now, in my original article, I accepted that dynamic languages helped ameliorate the sharp edges of statically typed languages.  Importantly:

  • Python’s constructor syntax means that any constructor is effectively an implicit abstract factory.  (This advantage is unique to Python; Ruby is nowhere near as slick in this respect.)
  • The dynamic nature of Python means that your interaction surface with another class is exactly those methods you call, no more no less.

Now, in certain aspects, I assumed that certain principles became less important simply because the language took some of the burden.  William, however, has pointed out that I was wrong.*

The thing is, we were both concentrating on one aspect of SOLID here: statically typed languages have fairly high friction related to their type system that can render code brittle.  We therefore have practices closely associated with SOLID principles that are pretty much the only way to keep code flexible in languages like C#.  These practices, such as always creating an interface to go with an implementation, are themselves a form of friction which Ryan was arguing was unnecessary in Python.

As William points out, that’s a good benefit of SOLID, but it’s not the whole story.

The Interface Segregation Principle: It's not rocket science.

ISP Isn’t About Code

Imagine you’ve got a space station.  This station gets visited by two kinds of ships: shuttles, which carry people, and refuelling tankers.  Now, the requirements for the shuttle’s docking interface are quite large: you’ve got to be able to comfortably maintain a stable, human-shaped hole between the two for an extended period of time.  Refuelling, on the other hand, is carried out by attaching a pipe to the tanker.

Now imagine that you were told that both ships needed to use the same connector.  You’d end up with a massively overcomplex connector.  Now, this metaphor works perfectly well if you consider the space station to be exposing a single IConnector interface and the ships to be consuming classes.  However, William’s first point is that it actually holds for data feeds, web services, any interaction between two systems.  Indeed, the ISP does, in fact, apply to space stations.  In many ways, interfaces are cheap in code.  But in third-party integration, they’re expensive, and so the ISP is more important.  Something to bear in mind the next time you try to reuse the web service you built for the last client.
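In code, the metaphor comes out something like this sketch (the interfaces are invented for illustration, not lifted from anywhere real):

// One connector for everybody: every visitor has to understand, and the
// station has to support, the union of every other visitor's needs.
public interface IConnector {
    void OpenAirlock();
    void AttachFuelLine();
}

// Role interfaces, one per consumer: the shuttle never sees the fuel line and
// the tanker never sees the airlock, so each side can change independently.
public interface IDockingPort { void OpenAirlock(); }
public interface IFuelPort    { void AttachFuelLine(); }

public class SpaceStation : IDockingPort, IFuelPort {
    public void OpenAirlock()    { /* crew transfer */ }
    public void AttachFuelLine() { /* refuelling */ }
}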

Just Because You Can, Doesn’t Mean You Should

Since I’m interviewing at the moment, I’m getting heartily sick of hearing the phrase “an interface is a contract”, but it’s relevant in this context.  In a statically typed language the contract is fixed and enforced by the consumed class.  Because of this friction, often you get an interface that is larger than it should be because it’s trying to be forgiving enough to handle multiple clients.  ISP says you should be doing the opposite: having interfaces for each consumer.  In a dynamic language, the consumed class can’t enforce the contract.  However, that doesn’t remove the concern, it just rebalances the responsibilities. 

Returning to the space station, imagine if you allowed a ship to attach itself to any part of the hull.  That would certainly help with adding in new types of vessel to the mix.  The problem would come when you wanted to change the space station itself.  Maybe those solar panels aren’t very useful anymore and you’d like to get rid of them.  Unfortunately, it turns out that there’s a visiting space monster that wraps its tentacles around the panels.  You don’t want to upset the monster, so you end up leaving the useless panels on the station.

Reducing Entanglement

This is the danger in dynamic languages.  In a statically typed language, the space monster wouldn’t have been able to visit at all without work on the part of the station.  However, if we observe the ISP, we still have to do the work.  Equally, the space monster needs to be responsible and not just attach itself to anything that provides purchase.  To put it more formally, the consumed class still needs to export an interface the consuming class is going to find useful, and the consuming class has to avoid taking unnecessary dependencies.  The expression of the problem may be different, but the concerns and the principle remain.

I originally said that, because Python automatically keeps interface surfaces as small as what the developer actually uses, there wasn’t much you could do about ISP in Python, but in fact that’s not the case.  Interaction interfaces between classes can still be made smaller and more role-specific.  You can still attempt to create Unified Modelzilla in Python, and it will be as bad an idea as it was when you tried it in J2EE.  In many ways, paying attention to ISP is more important in Python than it is with a statically typed language.

*If you want to read it, William’s article is on his home page dated 21 November. I’m afraid I don’t have a permalink.

Decorator Pattern: The Leaking This Problem

With all of the substitution patterns, the principle is that the proxied target doesn’t need to be aware of the proxying object.  That’s pretty achievable if what you’re trying to do is provide a local proxy to a remote object.  However, when you’re using a decorator, things get a bit trickier.  Welcome to the “leaking this” problem.

This is leaking

To start with an easy example, what if the target does this:

return this;

What do you do?  Well, it’s not obvious, but typically you’ll get the decorator to return itself.  This case is relatively easy to spot.  But how about if it does this:

return this.AnotherMethodOnInterface();

Here you can’t intercept the call at all.  Maybe you didn’t want to, but this is the case in which inheritance can actually be more useful than composition.  But there’s an even worse case:

return new SomeOtherObject(this);

Okay, well your decorator can give SomeOtherObject a decorator as well, and often that’s what you wanted to do.  But sometimes you actually wanted SomeOtherObject to take a dependency on the decorator, and that can’t be achieved.  Using a factory doesn’t help, since it’s typically a constructor dependency and as such unaware of the decorator.*

It just gets worse.  What if your target raises an event?  You’re going to have to make sure the sender points to the decorator.  The target could stick itself into a global variable (ugly, but possible).  So what’s the solution?  Here’s the thing: there isn’t one.  There are solutions for specific cases, but there’s no general way of replacing an object with a decorated version.  Sometimes, you’ve just got to redesign your target object to make sure you get the behaviour you want.
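Here’s a minimal sketch of the easy case and the hard case, with invented types:

using System;

public interface IWidget {
    IWidget Self();   // returns this
    void Poke();
}

public class Widget : IWidget {
    public IWidget Self() { return this; }       // leaks the undecorated instance
    public void Poke() { Self().ToString(); }    // internal calls never see a decorator
}

public class LoggingWidget : IWidget {
    private readonly IWidget inner;
    public LoggingWidget(IWidget inner) { this.inner = inner; }

    public IWidget Self() {
        // The easy case: spot the leak and hand back the decorator instead.
        var result = inner.Self();
        return ReferenceEquals(result, inner) ? this : result;
    }

    public void Poke() {
        Console.WriteLine("Poke");
        inner.Poke();   // but the this.Self() call inside Widget.Poke bypasses us entirely
    }
}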

Why Isn’t This a Problem For Remote Proxies?

You might be thinking “but I’ve been using Remoting/WCF/RMI for years and I’ve never had a problem”.  And you’d be right.  The thing is: proxies don’t change behaviour, so you never encounter a position in which using the unproxied version would cause an issue.  The original object stays on the server, the proxy stays on the client.  If you take a look at the examples above, it’s really easy to answer the question “What should the proxy do?”

If you think that it’s painful to deal with hand-written decorators, wait until you try to build a framework for decorators.  Castle’s InterfaceProxyWithTarget and InterfaceProxyWithTargetInterface** methods are exactly that: general ways of writing decorators.  Anyone who uses DynamicProxy runs into the problem sooner or later.  The knee-jerk reaction is that there’s something wrong with DynamicProxy.  Later on you realize it’s a limitation of the programming language: there’s simply no way of expressing what you actually wanted to achieve.

*You could pass the factory in as a function parameter, but you’d typically have to redesign your target to achieve this.

**Read about the difference on Krzysztof’s blog.  Read about how leaking this pertains to Dynamic Proxy specifically here.

Open Source Should Be Read/Write

Linus Torvalds is a busy guy.  Not content with pushing badge-name Unix vendors into the land of the legacy system, and creating an operating system that’s nigh on won the server market (It’s not looking too shabby on mobile and embedded systems, either), he somehow managed to find the time to revolutionise source code management as well.  Anyone who’s jumped straight between Subversion* and git will know quite how radical a change it is.  But the reason I love it is that it’s fundamentally changed the mindset of open source software.

You see, although open source projects were meant to be collaborative, the repositories were necessarily authoritative.  Who had access to them was limited by the governing body (or person).  If you weren’t on the approved list, you had to prove yourself before you got access.  So, imagine you’re using Fluent NHibernate and discover that you can’t make a composite primary key that references another composite key (true at the time of writing).  You download the code and make the change.  But now what are you going to do?

  • Stick together a patch (you need to change all of five lines, IIRC) and hope that the overworked maintainers get around to incorporating it before you need to download the code again.
  • Change the code, and keep your own copy of the code.  Stop updating your version, losing all subsequent improvements to the main branch.

If Fluent NHibernate was on Subversion, those would be your only options.  However, Fluent NHibernate is on GitHub, so you can create a quick fork, make the change and send a pull request to the maintainers.  They can choose to incorporate the change or ignore it, but you can keep updating your version in line with their changes.  Git and GitHub are part of a revolutionary democratisation of the open source software development process.  The whole mindset is different.  Before, a fork was something you had to justify carefully, setting clear blue water between you and the maintainers.  You set yourself up as an authority in competition with them.  On GitHub, you fork because you’re interested; it’s a compliment to the maintainers, not a challenge.

Power To The People

The great thing with Git and GitHub is the implicit change in approach, but there’s still much more to do.  One of my personal bugbears is the completely impenetrable nature of build systems on .NET.  There’s a reason I keep documenting build instructions, and there’s a reason that this is one of the most popular articles on my site, entirely from natural search.  Those articles typically were the result of well over a day’s work.  If you’re thinking that everyone should spend a week of pain before they can build your project, you’re an elitist and I suggest you consider whether you’re on the right side of the argument. 

For that matter, exactly where do I download the binaries?  Every project has a different answer.  We’ve just got friction on top of friction here.  We’ve got nothing even remotely resembling Gems in ALT.Net (why exactly that is would probably take another 900 word post…).

My First OSS Project

My first try at an OSS project was an unmitigated disaster.  The idea was really simple: provide a tool that would download and build common .NET open source packages.  Sounds easy enough unless you’ve ever tried it.  Just figuring out a reliable command line way of installing ruby was a challenge (not sure I ever exactly nailed it).  The Horn guys gave up as well and started to redirect the project towards something they could actually build.  The SymbolSource guys tried to address another problem with not being able to build the code: the fact that you can’t sensibly debug it.

But, with all due respect to the Horn and SymbolSource projects, they’re just introducing some extra authorities to help me with some of the problems of my existing authorities.  What I, and most developers, want and need is to be able to easily build our own versions of open source projects, whatever version we like (including trunk).  You want to work with some code, discover a problem, step through it, figure out what’s going on and fix it.

Here’s an idea: how about open source projects have solution files, straight out of source control, that actually build?**  Radical, I know.  The Castle project can do that, and goes to a fair amount of effort to achieve it.  nServiceBus is way off.  If you had that, combining versions would be a cinch: just merge the solutions together and away you go.

GitHub’s great, but it’s only part of the story.  There’s still a lot more to do before Open Source is truly open to all.  It’s the right thing to do, and it sure beats complaining about how the average developer doesn’t care about ALT.NET.

*or worse, I’m using TFS…

**There are a couple of technical issues, like what you do with parallel Silverlight and .NET versions, but at worst that’s solvable with a quick batch file entitled “RunMeAfterPerformingGetLatest.cmd”.

Update: I’d originally asserted that Castle Core didn’t 100% build from the solution.  Roelof Blom pointed out on Buzz that this was incorrect.  I’m happy to correct the article.

Postscript: If you want to know more about the poster, read the full article at the International Museum of Women.

 

Substitution Patterns in Pictures

I’ve said it before and I’ll say it again.  I hate UML.  That doesn’t mean there aren’t pictures that can say 1000 words.  Here’s approximately 6000 words on the subject of substitution patterns.  Substitution patterns are patterns where you want functionality to expose a certain interface, but need some sort of bridging logic to achieve that.

One nice thing about all of these patterns is that each of them describes a single object.  Patterns that involve several actors are harder to name and explain.  Here, you’ve got the object that implements the pattern, and the target.  The target never knows anything about the pattern (which is why I haven’t labelled them in the diagrams).

Proxy

A proxy is an object that behaves exactly like its target.  Usually it only exists to cross a machine or process boundary.  WCF and RMI create proxies all of the time.  I’ve written more about proxies before, but I ran with the more colloquial use of the term, where proxy and decorator are basically the same thing.  I’m using the formal terminology here, but it’s still probably time we just accepted that common practice treats them as the same.

Proxy Pattern 

Adapter

A square peg in a round hole.  Used typically to deal with interfacing issues.

Adapter Pattern

Decorator

A round peg in a round hole.  Decorator differs from proxy in that it changes behaviour.

Decorator Pattern

Circuit Breaker

Not a classic Gang of Four pattern, although it’s technically a special case of decorator.  Here you have two implementations: a primary implementation and a fallback implementation.  Typically, the primary implementation is a remote proxy and the fallback is a dummy class that just throws exceptions, but obviously other arrangements are possible. The circuit breaker flips between the two depending on the behaviour of the primary implementation.  If connectivity goes down or performance degrades unacceptably, subsequent calls are routed to the fallback until the circuit breaker decides to try the primary again.  This prevents the calling system from contributing to the load on the remote system.  There’s a good implementation using Castle DynamicProxy over on Davy Brion’s blog (it conflates the circuit breaker with the fallback).

Circuit Breaker Pattern

I’ve marked the Fallback target in this case because it is an actor in the pattern.  The remote target is still unaware of the circuit breaker.
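Here’s a minimal sketch of the idea with invented types; a real implementation would add timeouts, failure thresholds and thread safety:

using System;

public interface IPricingService { decimal GetPrice(string sku); }

public class CircuitBreakerPricingService : IPricingService {
    private readonly IPricingService primary;    // usually a remote proxy
    private readonly IPricingService fallback;   // dummy: throws or returns defaults
    private bool open;                           // open = route calls to the fallback

    public CircuitBreakerPricingService(IPricingService primary, IPricingService fallback) {
        this.primary = primary;
        this.fallback = fallback;
    }

    public decimal GetPrice(string sku) {
        if (open) return fallback.GetPrice(sku);
        try {
            return primary.GetPrice(sku);
        }
        catch (Exception) {
            open = true;                         // trip the breaker; retry logic elided
            return fallback.GetPrice(sku);
        }
    }
}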

Composite

Multiple round pegs, one round hole.  Not really anything to do with trees.

Composite Pattern
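A quick sketch with invented types: several implementations sitting behind one instance of the same interface.

public interface INotifier { void Notify(string message); }

public class CompositeNotifier : INotifier {
    private readonly INotifier[] notifiers;

    public CompositeNotifier(params INotifier[] notifiers) {
        this.notifiers = notifiers;
    }

    public void Notify(string message) {
        foreach (var notifier in notifiers)
            notifier.Notify(message);   // every peg gets the call
    }
}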

Façade

Loads of pegs, lots of shapes, one round hole.  Used to simplify a subsystem.

Facade Pattern

Solution Transform Now Supports Round Tripping to VS2010 and .NET 4.0

Headline says it all.  I’ve spent a fair bit of time recently knocking the code base into shape and adding minor details which sound dull but reduce friction when you’re using it, such as the ability to explicitly remove an assembly reference.  But more exciting is the ability to switch between VS2010 and VS2008.  This is a “dumb conversion”: it doesn’t modify your CS files (personally, I think that’s what you want).  It does, however, tweak assemblies.

Some example command lines:

Convert to VS2010:  SolutionTransform Retarget --solution %cd%\InversionOfControl-vs2008.sln --ide vs2010 --target dotnet40 --rename --VS2010

Convert back:  SolutionTransform Retarget --solution %cd%\InversionOfControl-vs2010.sln --ide vs2008 --target dotnet35 --rename --VS2008

Convert to Silverlight 3.0:  SolutionTransform Retarget --solution %CD%\InversionOfControl-vs2008.sln --target silverlight30 --rename -Silverlight --assemblyPaths ..\lib\silverlight-3.0

Now, obviously, if you’re migrating your project to VS2010, you’ll be using the Visual Studio upgrade wizard.  However, this isn’t for that: it’s for including in build scripts so that you can keep parallel versions of your code running without pain.  Code is found on GitHub, and the documentation on the Castle Wiki.
