In Praise of Postmodernist Programming

So, one of those dear people on Twitter who have been trying and failing to be Twitter’s Main Character for years published a rant about postmodernist programming. Now, let’s leave aside the fact that they’ve literally created a fictional concept that sounds like it’s from a dril tweet to get angry about. Let’s actually engage with the fundamental narrative and point out how dumb it is in the context of programming.

What is Postmodernism Anyway?

Postmodernism is a movement that produced some really striking visual art and some utterly unreadable prose, but the fundamental concept of it was a rejection of the certainties of the modernist movement. Which is kind of hilarious because that was itself a rejection of the certainties of the Enlightenment. Yes, I cribbed that from Wikipedia. This is very different from our dear failed main character’s definition of postmodernism, which is kind of ironic since they were lecturing us about the absolute certainty of knowledge.

FMC’s contention is that there are certainties in programming, that these are obvious and, implicitly, that these are important and can be extended by logical deduction into broad areas. Also, typically, that they’re in a good position to determine these eternal truths and that if you disagree with them you’re wrong. This is a rhetorical framework shared between tradcaths, biblical fundamentalists and constitutional originalists, one that reaches back as far as Plato (and probably further) and that, sadly, hasn’t actually fared very well since people started doing science. It turns out that it really, seriously matters what your first principles are; incontrovertible ones are thin on the ground, and everything, absolutely everything, needs to be checked empirically. Logical reasoning is only as good as your assumptions, and it turns out that your assumptions are nearly always faulty when tested against the real world. There’s a tendency, then, to try to conform the world to your assumptions rather than the other way around. This is as much a problem in politics as it is in systems design.

Let’s Beat This Strawman Like A Piñata

So, in the spirit of the discourse, I’m going to set up a strawman, knock it down and then act like I’ve proved something. Here’s my naively true, but actually easily falsified, statement:

An O(n) algorithm is better than an O(n^2) algorithm

Me, obviously

So, the first question is “better how”? There’s an obvious subtlety to do with time/space complexity, but that’s not even scratching the surface. Let’s say space doesn’t matter. Is the O(n) faster? Well, constant factors can matter, and it depends on your workload. Again, we’ve left the realm of pure logic and entered the realm of science and engineering. Then there’s the question of whether or not it matters at all, our dear originator’s big beef with “postmodernism”. Is the difference between the two twenty nanoseconds in a 5-second process? Then it honest to God doesn’t matter. (Well, probably, but if you’re arguing with me on this you’ve accepted my broader point, like an intellectually dishonest Catch-22.)
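To make “check it empirically” concrete, here’s a minimal sketch in Python (the functions and the input size are invented for illustration): two duplicate-detection routines, one O(n²) and one O(n), and a measurement harness. Which wins on any given workload is a question for the stopwatch, not for deduction.

```python
import timeit


def has_duplicate_quadratic(items):
    # O(n^2): compare every pair directly; no hashing, no allocation.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    # O(n): track seen values in a set; pays hashing and allocation costs.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False


# Measure both on a small input. Which one is faster depends on n, the
# data, and the runtime -- the point is to measure, not to deduce.
small = list(range(8))
for fn in (has_duplicate_quadratic, has_duplicate_linear):
    elapsed = timeit.timeit(lambda: fn(small), number=100_000)
    print(fn.__name__, f"{elapsed:.3f}s")
```

On tiny inputs the “worse” algorithm often wins because of those constant factors; at some n the asymptotics take over. Where that crossover sits is an empirical fact about your machine and your data.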

Let’s say it is slower to a significant extent. How complex is the faster algorithm? How easy is it to maintain? How easy is it to verify it doesn’t have any bugs? How much time will it take to develop and PR? Does it operate without locks while the faster algorithm requires us to lock the data structure? Do either of these algorithms actually solve the problem we have?

None of these questions are answerable without context, which I have pointedly failed to provide. But it’s my contention that the decisions any serious developer has to deal with every day look more like this: there are no easy answers, and things need to be thought through. It’s not that nothing matters so much as that everything matters, and there are a hundred concerns, including requirements capture, project velocity and team health, that are an awful lot more important than whether you’re using a ring buffer correctly.

Postmodernist Programmers

So there you have it, a clickbait response to a clickbait tweetstorm shorn of all context because apparently truth is universal so you shouldn’t need it. If you actually want a considered and thoughtful response to the original screed, Dan North took down the whole mentality fourteen years ago. A guy who will happily tell you, if you ask, that he 100% believes there are eternal, immutable truths. Context is everything, nothing matters without it and there’s a whole raft of uncertainties we just have to navigate. So yeah, I’m proud to be a postmodernist programmer. Although I prefer the more vernacular term: a good one.

Don’t Write a Programming Language

So my domain name just renewed and entertainingly charged my wife’s credit card (I have no recollection of how exactly that happened), and since I haven’t posted in three years I figured it might be time I actually justified the blog’s existence. Also, frankly, I’ve been on something of a polemic tear for several days. Much of that, I’m afraid, is on a company intranet with large numbers of references to internal systems, so even if you could read it, it would be hard to disagree with me. But this particular observation, won the incredibly painful way, seemed general enough to put in a blog post.

So, having opened my first blog post in three years with an entirely irrelevant digression, let me TL;DR what I’m going to say here:

  • Don’t write a small programming language.
  • That includes DSLs.
  • Use a real programming language.

I think implementing tiny languages or DSLs is kind of a standard mistake that anyone who loves coding makes. What’s not to like about writing code that allows you to write more code? I’ve made this mistake, sometimes accidentally, more than once. I must not be a fast learner. These days, if I’m called on to implement e.g. templating, I am very careful to lock down the functionality to the most useful, extremely limited, features and let people just write custom code to deal with anything else. I strongly believe that if you are getting to the point where you want to implement:

  • if statements,
  • variable substitution,
  • or, heaven preserve us, for loops

then you want to use a real programming language that’s maintained by professionals, possibly by embedding it into your code. If not, redesign your requirements so you don’t need that kind of complexity. It always sounds much simpler than it will prove to be. (See also: maintaining an open source library people actually use.) This is one of the reasons I champion dotnet script: it’s a proper programming language with the affordances you need to keep your code under control.
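Here’s roughly what “lock down the functionality” looks like in practice, as a hedged sketch in Python (the names and the placeholder syntax are invented, and Python stands in for whatever host language you actually use): variable substitution from an explicit set of values, and a hard error on anything else. No conditionals, no loops; anything more complex goes in ordinary code.

```python
import re

# Deliberately minimal templating: {name} placeholders only, filled from
# an explicit dict. Unknown placeholders fail loudly rather than growing
# new syntax to handle them. (Illustrative sketch, not a library.)
_PLACEHOLDER = re.compile(r"\{([a-z_]+)\}")


def render(template: str, variables: dict) -> str:
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"unknown template variable: {name}")
        return str(variables[name])

    return _PLACEHOLDER.sub(substitute, template)


print(render("Hello, {name}!", {"name": "world"}))  # Hello, world!
```

The design choice is the whole point: by refusing to add if statements and for loops here, you keep the “language” small enough that it never needs a parser, a spec or a maintainer.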

This, incidentally, goes double for complex configuration files. You may not think of their implementation as a rules engine written in an ad-hoc language, but the maintenance problems you encounter will not care what mental model you’re using.
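The alternative to a config-file-turned-rules-engine is to express the rules in the host language itself, where you get type checking, code review and a debugger for free. A minimal sketch, again in Python with invented names:

```python
from dataclasses import dataclass


# Configuration as plain code in the host language, instead of an ad-hoc
# rules syntax inside a config file. (Hypothetical example; the policy
# fields and helper are invented for illustration.)
@dataclass
class RetryPolicy:
    max_attempts: int
    backoff_seconds: float


def should_retry(policy: RetryPolicy, attempt: int) -> bool:
    # The "rule" is an ordinary, testable function, not a string in a
    # config file interpreted by an ad-hoc engine.
    return attempt < policy.max_attempts


DEFAULT_POLICY = RetryPolicy(max_attempts=3, backoff_seconds=0.5)
print(should_retry(DEFAULT_POLICY, 1))  # True
print(should_retry(DEFAULT_POLICY, 3))  # False
```

Static data (ports, paths, limits) belongs in config files; the moment the file starts encoding decisions, it has become a program and deserves a programming language.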

As you can probably tell, I feel pretty strongly about this. I have nothing but respect for people like Gabriella Gonzalez who have done this and pulled it off, but I’m also pretty sure she knew exactly what she was getting into. I also don’t believe reading this is going to stop anyone from going down this road. My hope, rather, is that someone who’s read this, after the first time they’ve burnt themselves creating a DSL they’ve gradually grown to hate, won’t do what I did and try again.