Metrics: Signal to Noise

Okay, I promised I’d stop talking about metrics, but it occurred to me that there’s a serious point I hadn’t covered: sometimes your measures are broken, and you need to fix the measure before it will tell you anything useful.  The classic example is error reporting: if you’re reporting too many errors, you won’t notice when something serious is going on.  I once saw a system that generated 2,400 errors a week, and that was under unexceptional circumstances.  At 30 seconds per error, working through them all would take 20 hours: half your working week.
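The arithmetic is worth spelling out. A minimal back-of-the-envelope sketch, assuming a 40-hour working week (the error count and per-error time are from the story above; everything else is illustrative):

```python
# Cost of triaging every reported error, assuming a 40-hour week.
errors_per_week = 2400
seconds_per_error = 30
working_week_hours = 40

hours_spent = errors_per_week * seconds_per_error / 3600
fraction = hours_spent / working_week_hours

print(f"{hours_spent:.0f} hours, {fraction:.0%} of the week")
# prints: 20 hours, 50% of the week
```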

It doesn’t take a genius to figure out that no-one was looking at those errors.  So if a couple of really serious problems were occurring, no-one would ever know.  Well, not until the phone calls started.

Thing is, the code entanglement statistics from NDepend look like that: I can pretty much guarantee that 95% of them aren’t a problem.  But you’re never going to know unless you go through them all.  Oren’s random sampling doesn’t really prove anything other than that there’s a lot of noise; it doesn’t mean there isn’t some signal there.
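To see why a small random sample proves so little, consider a quick sketch. Assuming (purely for illustration) that 95% of the reported items really are noise and the sample is drawn with replacement, the chance that a sample of n items contains no genuine problem at all is 0.95 to the power n:

```python
# Illustrative only: probability that a random sample of the reports
# turns up nothing but noise, given an assumed 95% noise fraction.
def p_all_noise(sample_size: int, noise_fraction: float = 0.95) -> float:
    """Chance every sampled item is noise (sampling with replacement)."""
    return noise_fraction ** sample_size

for n in (5, 10, 50):
    print(n, round(p_all_noise(n), 3))
# prints:
# 5 0.774
# 10 0.599
# 50 0.077
```

So a handful of samples that all turn out to be harmless is entirely consistent with a few serious problems hiding in the pile; you’d need a sample of around fifty before finding nothing started to mean something.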

Personally, I find the idea of code metrics fascinating, but I can’t help feeling that we’re still a long way from having usable ones.  An NDepend-driven refactoring might yield genuine results, or it might be a complete waste of time.  This doesn’t mean that Patrick shouldn’t be trying to improve the state of the art.


Published by

Julian Birch

Full time dad, does a bit of coding on the side.
